Test Report: QEMU_macOS 18427

190844ee5aebf41cade975daf7bc7fe77d6b0ce4:2024-03-18:33631

Failed tests (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 39.68
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.98
36 TestAddons/Setup 10.24
37 TestCertOptions 10.15
38 TestCertExpiration 197.43
39 TestDockerFlags 12.36
40 TestForceSystemdFlag 9.98
41 TestForceSystemdEnv 10.2
47 TestErrorSpam/setup 9.86
56 TestFunctional/serial/StartWithProxy 9.9
58 TestFunctional/serial/SoftStart 5.27
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.17
70 TestFunctional/serial/MinikubeKubectlCmd 0.56
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.71
72 TestFunctional/serial/ExtraConfig 5.26
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.08
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.13
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.12
91 TestFunctional/parallel/CpCmd 0.29
93 TestFunctional/parallel/FileSync 0.08
94 TestFunctional/parallel/CertSync 0.29
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
111 TestFunctional/parallel/DockerEnv/bash 0.05
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.05
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
119 TestFunctional/parallel/ServiceCmd/Format 0.04
120 TestFunctional/parallel/ServiceCmd/URL 0.05
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 115
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.36
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.34
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.65
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 21.89
150 TestMultiControlPlane/serial/StartCluster 9.86
151 TestMultiControlPlane/serial/DeployApp 87.42
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.08
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.1
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.12
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.1
159 TestMultiControlPlane/serial/RestartSecondaryNode 54.39
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.11
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.95
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.11
164 TestMultiControlPlane/serial/StopCluster 3.96
165 TestMultiControlPlane/serial/RestartCluster 5.26
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.1
167 TestMultiControlPlane/serial/AddSecondaryNode 0.07
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.11
171 TestImageBuild/serial/Setup 9.92
174 TestJSONOutput/start/Command 9.73
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.22
206 TestMountStart/serial/StartWithMountFirst 10.66
209 TestMultiNode/serial/FreshStart2Nodes 9.88
210 TestMultiNode/serial/DeployApp2Nodes 103.63
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.07
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.1
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.15
217 TestMultiNode/serial/StartAfterStop 46.41
218 TestMultiNode/serial/RestartKeepsNodes 7.34
219 TestMultiNode/serial/DeleteNode 0.11
220 TestMultiNode/serial/StopMultiNode 4.11
221 TestMultiNode/serial/RestartMultiNode 5.26
222 TestMultiNode/serial/ValidateNameConflict 20.38
226 TestPreload 9.98
228 TestScheduledStopUnix 10.14
229 TestSkaffold 16.81
232 TestRunningBinaryUpgrade 662.22
234 TestKubernetesUpgrade 17.31
248 TestStoppedBinaryUpgrade/Upgrade 619.11
258 TestPause/serial/Start 10
261 TestNoKubernetes/serial/StartWithK8s 9.96
262 TestNoKubernetes/serial/StartWithStopK8s 7.47
263 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.09
264 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.55
265 TestNoKubernetes/serial/Start 5.97
269 TestNoKubernetes/serial/StartNoArgs 7.28
271 TestNetworkPlugins/group/auto/Start 9.91
272 TestNetworkPlugins/group/kindnet/Start 9.82
273 TestNetworkPlugins/group/calico/Start 9.86
274 TestNetworkPlugins/group/custom-flannel/Start 9.92
275 TestNetworkPlugins/group/false/Start 9.85
276 TestNetworkPlugins/group/enable-default-cni/Start 9.82
277 TestNetworkPlugins/group/flannel/Start 9.85
278 TestNetworkPlugins/group/bridge/Start 9.85
279 TestNetworkPlugins/group/kubenet/Start 9.88
281 TestStartStop/group/old-k8s-version/serial/FirstStart 10.15
282 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
290 TestStartStop/group/old-k8s-version/serial/Pause 0.11
292 TestStartStop/group/no-preload/serial/FirstStart 9.84
293 TestStartStop/group/no-preload/serial/DeployApp 0.09
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
297 TestStartStop/group/no-preload/serial/SecondStart 5.27
298 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/no-preload/serial/Pause 0.11
303 TestStartStop/group/embed-certs/serial/FirstStart 9.97
304 TestStartStop/group/embed-certs/serial/DeployApp 0.09
305 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
308 TestStartStop/group/embed-certs/serial/SecondStart 5.25
309 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
310 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
311 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
312 TestStartStop/group/embed-certs/serial/Pause 0.1
314 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.92
316 TestStartStop/group/newest-cni/serial/FirstStart 10.08
317 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
324 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
326 TestStartStop/group/newest-cni/serial/SecondStart 5.26
327 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
328 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
329 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
330 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
334 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (39.68s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-305000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-305000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (39.680049208s)

-- stdout --
	{"specversion":"1.0","id":"c55009e6-35aa-4636-a9e0-6735b93f4e0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-305000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"38370e31-fc25-4df9-9248-4a160c582e8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18427"}}
	{"specversion":"1.0","id":"d33671d1-3876-4249-bbac-42fd5c3fa7e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig"}}
	{"specversion":"1.0","id":"6072dc2b-cd7e-4c95-816a-53b049a12ced","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"698c28ee-2496-4c04-9142-f2dd04ff1335","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"62fa7ceb-d27f-4324-9aec-c9f82bc9c158","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube"}}
	{"specversion":"1.0","id":"06e44448-0e19-43be-ad93-7743fd840d5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"0ee27904-bc71-4b8b-8830-01472140db61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a1ecf586-6382-4e5b-be9f-a945ba2066da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"2ea29942-1d7f-4502-8583-66c984ebb9ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7eb5613e-648a-4d53-bc20-86288fc6e283","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-305000\" primary control-plane node in \"download-only-305000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e412fc2b-7c85-41fa-a1cc-e05aa62adcaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"13cf9357-61e5-4f2c-bce9-3c8b366542d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1089cb520 0x1089cb520 0x1089cb520 0x1089cb520 0x1089cb520 0x1089cb520 0x1089cb520] Decompressors:map[bz2:0x140007daa28 gz:0x140007daab0 tar:0x140007daa60 tar.bz2:0x140007daa70 tar.gz:0x140007daa80 tar.xz:0x140007daa90 tar.zst:0x140007daaa0 tbz2:0x140007daa70 tgz:0x1
40007daa80 txz:0x140007daa90 tzst:0x140007daaa0 xz:0x140007daab8 zip:0x140007daac0 zst:0x140007daad0] Getters:map[file:0x140006c8c70 http:0x14000568230 https:0x14000568280] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"d24d8eb1-030a-4f76-9faa-b3577df23a19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0318 04:48:01.115369   19928 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:48:01.115504   19928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:48:01.115507   19928 out.go:304] Setting ErrFile to fd 2...
	I0318 04:48:01.115510   19928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:48:01.115641   19928 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	W0318 04:48:01.115725   19928 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18427-19517/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18427-19517/.minikube/config/config.json: no such file or directory
	I0318 04:48:01.116985   19928 out.go:298] Setting JSON to true
	I0318 04:48:01.134790   19928 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10054,"bootTime":1710752427,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:48:01.134855   19928 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:48:01.139982   19928 out.go:97] [download-only-305000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:48:01.142858   19928 out.go:169] MINIKUBE_LOCATION=18427
	I0318 04:48:01.140141   19928 notify.go:220] Checking for updates...
	W0318 04:48:01.140201   19928 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball: no such file or directory
	I0318 04:48:01.151865   19928 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 04:48:01.155873   19928 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:48:01.159886   19928 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:48:01.162982   19928 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	W0318 04:48:01.168915   19928 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 04:48:01.169145   19928 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:48:01.171883   19928 out.go:97] Using the qemu2 driver based on user configuration
	I0318 04:48:01.171903   19928 start.go:297] selected driver: qemu2
	I0318 04:48:01.171918   19928 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:48:01.172024   19928 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:48:01.174827   19928 out.go:169] Automatically selected the socket_vmnet network
	I0318 04:48:01.181141   19928 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0318 04:48:01.181250   19928 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 04:48:01.181348   19928 cni.go:84] Creating CNI manager for ""
	I0318 04:48:01.181368   19928 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 04:48:01.181417   19928 start.go:340] cluster config:
	{Name:download-only-305000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:48:01.186215   19928 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:48:01.190753   19928 out.go:97] Downloading VM boot image ...
	I0318 04:48:01.190771   19928 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso
	I0318 04:48:19.138290   19928 out.go:97] Starting "download-only-305000" primary control-plane node in "download-only-305000" cluster
	I0318 04:48:19.138315   19928 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 04:48:19.458724   19928 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 04:48:19.458803   19928 cache.go:56] Caching tarball of preloaded images
	I0318 04:48:19.460492   19928 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 04:48:19.466322   19928 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0318 04:48:19.466346   19928 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:48:20.068677   19928 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 04:48:39.000995   19928 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:48:39.001150   19928 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:48:39.698964   19928 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 04:48:39.699162   19928 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/download-only-305000/config.json ...
	I0318 04:48:39.699180   19928 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/download-only-305000/config.json: {Name:mka42895365f71bc1505c7c59e512495f624655a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:48:39.699391   19928 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 04:48:39.699580   19928 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0318 04:48:40.712926   19928 out.go:169] 
	W0318 04:48:40.717963   19928 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1089cb520 0x1089cb520 0x1089cb520 0x1089cb520 0x1089cb520 0x1089cb520 0x1089cb520] Decompressors:map[bz2:0x140007daa28 gz:0x140007daab0 tar:0x140007daa60 tar.bz2:0x140007daa70 tar.gz:0x140007daa80 tar.xz:0x140007daa90 tar.zst:0x140007daaa0 tbz2:0x140007daa70 tgz:0x140007daa80 txz:0x140007daa90 tzst:0x140007daaa0 xz:0x140007daab8 zip:0x140007daac0 zst:0x140007daad0] Getters:map[file:0x140006c8c70 http:0x14000568230 https:0x14000568280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0318 04:48:40.717989   19928 out_reason.go:110] 
	W0318 04:48:40.726764   19928 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:48:40.730916   19928 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-305000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (39.68s)
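
The root cause is the 404 on the kubectl checksum file: dl.k8s.io appears to publish no darwin/arm64 kubectl binary for v1.20.0 (Apple Silicon builds seem to begin only with later releases), so minikube cannot cache it and aborts with INET_CACHE_KUBECTL / exit status 40. A minimal standalone Go sketch, not part of the test suite, that reproduces the failing check using only the URL from the error message above:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Checksum file minikube fetches before the binary itself; the 404 on
		// this URL is what surfaces above as "bad response code: 404".
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		resp.Body.Close()
		fmt.Println(url, "->", resp.Status) // expected, per this report: 404 Not Found
	}

A 404 here would confirm the gap is upstream rather than a network flake on the build host.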

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
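
This failure cascades directly from the previous one: the kubectl download 404'd, so the cached binary the test looks for was never written. A minimal sketch of the same existence check (hypothetical standalone version; the path is copied verbatim from the failure message):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Path from the failure message above; os.Stat mirrors the check
		// the test performs at aaa_download_only_test.go:175.
		path := "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
		if _, err := os.Stat(path); err != nil {
			fmt.Println("kubectl not cached:", err) // here: no such file or directory
			os.Exit(1)
		}
		fmt.Println("kubectl cached at", path)
	}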

TestOffline (9.98s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-969000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-969000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.807747792s)

-- stdout --
	* [offline-docker-969000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-969000" primary control-plane node in "offline-docker-969000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-969000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:01:00.258029   21487 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:01:00.258181   21487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:01:00.258184   21487 out.go:304] Setting ErrFile to fd 2...
	I0318 05:01:00.258187   21487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:01:00.258312   21487 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:01:00.259395   21487 out.go:298] Setting JSON to false
	I0318 05:01:00.276906   21487 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10833,"bootTime":1710752427,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:01:00.276989   21487 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:01:00.282893   21487 out.go:177] * [offline-docker-969000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:01:00.290967   21487 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:01:00.290975   21487 notify.go:220] Checking for updates...
	I0318 05:01:00.298951   21487 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:01:00.301880   21487 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:01:00.304994   21487 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:01:00.308000   21487 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:01:00.310905   21487 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:01:00.314312   21487 config.go:182] Loaded profile config "multinode-730000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:01:00.314369   21487 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:01:00.318864   21487 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 05:01:00.325942   21487 start.go:297] selected driver: qemu2
	I0318 05:01:00.325951   21487 start.go:901] validating driver "qemu2" against <nil>
	I0318 05:01:00.325958   21487 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:01:00.328115   21487 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 05:01:00.330938   21487 out.go:177] * Automatically selected the socket_vmnet network
	I0318 05:01:00.333983   21487 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 05:01:00.334026   21487 cni.go:84] Creating CNI manager for ""
	I0318 05:01:00.334033   21487 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:01:00.334038   21487 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 05:01:00.334078   21487 start.go:340] cluster config:
	{Name:offline-docker-969000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:01:00.338703   21487 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:01:00.345914   21487 out.go:177] * Starting "offline-docker-969000" primary control-plane node in "offline-docker-969000" cluster
	I0318 05:01:00.349926   21487 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 05:01:00.349961   21487 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 05:01:00.349973   21487 cache.go:56] Caching tarball of preloaded images
	I0318 05:01:00.350047   21487 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:01:00.350052   21487 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 05:01:00.350109   21487 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/offline-docker-969000/config.json ...
	I0318 05:01:00.350120   21487 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/offline-docker-969000/config.json: {Name:mk4f654b784a5f278e8718965a0a453a666476d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:01:00.350355   21487 start.go:360] acquireMachinesLock for offline-docker-969000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:01:00.350389   21487 start.go:364] duration metric: took 23.584µs to acquireMachinesLock for "offline-docker-969000"
	I0318 05:01:00.350404   21487 start.go:93] Provisioning new machine with config: &{Name:offline-docker-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:01:00.350447   21487 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:01:00.353984   21487 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 05:01:00.369262   21487 start.go:159] libmachine.API.Create for "offline-docker-969000" (driver="qemu2")
	I0318 05:01:00.369292   21487 client.go:168] LocalClient.Create starting
	I0318 05:01:00.369363   21487 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:01:00.369391   21487 main.go:141] libmachine: Decoding PEM data...
	I0318 05:01:00.369399   21487 main.go:141] libmachine: Parsing certificate...
	I0318 05:01:00.369446   21487 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:01:00.369468   21487 main.go:141] libmachine: Decoding PEM data...
	I0318 05:01:00.369475   21487 main.go:141] libmachine: Parsing certificate...
	I0318 05:01:00.369850   21487 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:01:00.509428   21487 main.go:141] libmachine: Creating SSH key...
	I0318 05:01:00.555801   21487 main.go:141] libmachine: Creating Disk image...
	I0318 05:01:00.555810   21487 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:01:00.556457   21487 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/offline-docker-969000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/offline-docker-969000/disk.qcow2
	I0318 05:01:00.569672   21487 main.go:141] libmachine: STDOUT: 
	I0318 05:01:00.569703   21487 main.go:141] libmachine: STDERR: 
	I0318 05:01:00.569755   21487 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/offline-docker-969000/disk.qcow2 +20000M
	I0318 05:01:00.582026   21487 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:01:00.582048   21487 main.go:141] libmachine: STDERR: 
	I0318 05:01:00.582075   21487 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/offline-docker-969000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/offline-docker-969000/disk.qcow2
	I0318 05:01:00.582081   21487 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:01:00.582113   21487 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/offline-docker-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/offline-docker-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/offline-docker-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:4e:8a:68:c4:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/offline-docker-969000/disk.qcow2
	I0318 05:01:00.583911   21487 main.go:141] libmachine: STDOUT: 
	I0318 05:01:00.583930   21487 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:01:00.583954   21487 client.go:171] duration metric: took 214.664209ms to LocalClient.Create
	I0318 05:01:02.585935   21487 start.go:128] duration metric: took 2.235548458s to createHost
	I0318 05:01:02.585957   21487 start.go:83] releasing machines lock for "offline-docker-969000", held for 2.2356335s
	W0318 05:01:02.585973   21487 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:01:02.590335   21487 out.go:177] * Deleting "offline-docker-969000" in qemu2 ...
	W0318 05:01:02.601477   21487 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:01:02.601490   21487 start.go:728] Will try again in 5 seconds ...
	I0318 05:01:07.603651   21487 start.go:360] acquireMachinesLock for offline-docker-969000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:01:07.604132   21487 start.go:364] duration metric: took 362.791µs to acquireMachinesLock for "offline-docker-969000"
	I0318 05:01:07.604288   21487 start.go:93] Provisioning new machine with config: &{Name:offline-docker-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:01:07.604538   21487 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:01:07.614241   21487 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 05:01:07.665232   21487 start.go:159] libmachine.API.Create for "offline-docker-969000" (driver="qemu2")
	I0318 05:01:07.665306   21487 client.go:168] LocalClient.Create starting
	I0318 05:01:07.665426   21487 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:01:07.665498   21487 main.go:141] libmachine: Decoding PEM data...
	I0318 05:01:07.665523   21487 main.go:141] libmachine: Parsing certificate...
	I0318 05:01:07.665589   21487 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:01:07.665640   21487 main.go:141] libmachine: Decoding PEM data...
	I0318 05:01:07.665655   21487 main.go:141] libmachine: Parsing certificate...
	I0318 05:01:07.666259   21487 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:01:07.827559   21487 main.go:141] libmachine: Creating SSH key...
	I0318 05:01:07.956370   21487 main.go:141] libmachine: Creating Disk image...
	I0318 05:01:07.956376   21487 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:01:07.956595   21487 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/offline-docker-969000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/offline-docker-969000/disk.qcow2
	I0318 05:01:07.969251   21487 main.go:141] libmachine: STDOUT: 
	I0318 05:01:07.969267   21487 main.go:141] libmachine: STDERR: 
	I0318 05:01:07.969317   21487 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/offline-docker-969000/disk.qcow2 +20000M
	I0318 05:01:07.979822   21487 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:01:07.979840   21487 main.go:141] libmachine: STDERR: 
	I0318 05:01:07.979854   21487 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/offline-docker-969000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/offline-docker-969000/disk.qcow2
	I0318 05:01:07.979859   21487 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:01:07.979902   21487 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/offline-docker-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/offline-docker-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/offline-docker-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:20:b1:1b:2f:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/offline-docker-969000/disk.qcow2
	I0318 05:01:07.981525   21487 main.go:141] libmachine: STDOUT: 
	I0318 05:01:07.981540   21487 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:01:07.981553   21487 client.go:171] duration metric: took 316.25075ms to LocalClient.Create
	I0318 05:01:09.983662   21487 start.go:128] duration metric: took 2.379161875s to createHost
	I0318 05:01:09.983713   21487 start.go:83] releasing machines lock for "offline-docker-969000", held for 2.379632334s
	W0318 05:01:09.984162   21487 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:01:09.998863   21487 out.go:177] 
	W0318 05:01:10.004993   21487 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:01:10.005044   21487 out.go:239] * 
	* 
	W0318 05:01:10.007533   21487 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:01:10.017836   21487 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-969000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-03-18 05:01:10.034873 -0700 PDT m=+789.032501168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-969000 -n offline-docker-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-969000 -n offline-docker-969000: exit status 7 (69.048208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-969000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-969000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-969000
--- FAIL: TestOffline (9.98s)
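
Nearly every other short (~10s) failure in this run shares the signature seen here: the qemu2 driver cannot reach the socket_vmnet control socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), which points at the socket_vmnet daemon not running on the build host rather than at the tests themselves. A minimal Go sketch, assuming only that a healthy daemon accepts Unix-socket connections at the path from the error, to distinguish a dead daemon from a transient flake:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same socket the driver hands to socket_vmnet_client; "connection
		// refused" here reproduces the GUEST_PROVISION failures in this report.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}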

TestAddons/Setup (10.24s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-009000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-009000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.234760625s)

-- stdout --
	* [addons-009000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-009000" primary control-plane node in "addons-009000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-009000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:49:45.093428   20090 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:49:45.093569   20090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:49:45.093573   20090 out.go:304] Setting ErrFile to fd 2...
	I0318 04:49:45.093580   20090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:49:45.093699   20090 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:49:45.094809   20090 out.go:298] Setting JSON to false
	I0318 04:49:45.110794   20090 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10158,"bootTime":1710752427,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:49:45.110870   20090 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:49:45.115813   20090 out.go:177] * [addons-009000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:49:45.122840   20090 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 04:49:45.126796   20090 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 04:49:45.122896   20090 notify.go:220] Checking for updates...
	I0318 04:49:45.129807   20090 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:49:45.136816   20090 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:49:45.139800   20090 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 04:49:45.142751   20090 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:49:45.146949   20090 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:49:45.150807   20090 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:49:45.157755   20090 start.go:297] selected driver: qemu2
	I0318 04:49:45.157760   20090 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:49:45.157765   20090 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:49:45.160081   20090 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:49:45.164820   20090 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:49:45.167945   20090 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:49:45.167989   20090 cni.go:84] Creating CNI manager for ""
	I0318 04:49:45.167997   20090 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:49:45.168008   20090 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:49:45.168049   20090 start.go:340] cluster config:
	{Name:addons-009000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:49:45.172910   20090 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:49:45.180642   20090 out.go:177] * Starting "addons-009000" primary control-plane node in "addons-009000" cluster
	I0318 04:49:45.184763   20090 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:49:45.184782   20090 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:49:45.184796   20090 cache.go:56] Caching tarball of preloaded images
	I0318 04:49:45.184858   20090 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:49:45.184866   20090 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:49:45.185112   20090 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/addons-009000/config.json ...
	I0318 04:49:45.185124   20090 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/addons-009000/config.json: {Name:mk17380a5f127c2d1a886886c78e1cb6c7967ff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:49:45.185379   20090 start.go:360] acquireMachinesLock for addons-009000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:49:45.185529   20090 start.go:364] duration metric: took 144.458µs to acquireMachinesLock for "addons-009000"
	I0318 04:49:45.185544   20090 start.go:93] Provisioning new machine with config: &{Name:addons-009000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:49:45.185587   20090 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:49:45.194774   20090 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0318 04:49:45.213990   20090 start.go:159] libmachine.API.Create for "addons-009000" (driver="qemu2")
	I0318 04:49:45.214029   20090 client.go:168] LocalClient.Create starting
	I0318 04:49:45.214183   20090 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 04:49:45.396118   20090 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 04:49:45.517034   20090 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:49:45.758721   20090 main.go:141] libmachine: Creating SSH key...
	I0318 04:49:45.875870   20090 main.go:141] libmachine: Creating Disk image...
	I0318 04:49:45.875877   20090 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:49:45.876156   20090 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/addons-009000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/addons-009000/disk.qcow2
	I0318 04:49:45.888456   20090 main.go:141] libmachine: STDOUT: 
	I0318 04:49:45.888491   20090 main.go:141] libmachine: STDERR: 
	I0318 04:49:45.888548   20090 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/addons-009000/disk.qcow2 +20000M
	I0318 04:49:45.899287   20090 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:49:45.899309   20090 main.go:141] libmachine: STDERR: 
	I0318 04:49:45.899325   20090 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/addons-009000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/addons-009000/disk.qcow2
	I0318 04:49:45.899331   20090 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:49:45.899359   20090 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/addons-009000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/addons-009000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/addons-009000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:5f:61:f1:51:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/addons-009000/disk.qcow2
	I0318 04:49:45.901120   20090 main.go:141] libmachine: STDOUT: 
	I0318 04:49:45.901142   20090 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:49:45.901165   20090 client.go:171] duration metric: took 687.152ms to LocalClient.Create
	I0318 04:49:47.903318   20090 start.go:128] duration metric: took 2.717794958s to createHost
	I0318 04:49:47.903366   20090 start.go:83] releasing machines lock for "addons-009000", held for 2.717913959s
	W0318 04:49:47.903432   20090 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:49:47.915845   20090 out.go:177] * Deleting "addons-009000" in qemu2 ...
	W0318 04:49:47.944876   20090 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:49:47.944939   20090 start.go:728] Will try again in 5 seconds ...
	I0318 04:49:52.946934   20090 start.go:360] acquireMachinesLock for addons-009000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:49:52.947401   20090 start.go:364] duration metric: took 324.583µs to acquireMachinesLock for "addons-009000"
	I0318 04:49:52.947538   20090 start.go:93] Provisioning new machine with config: &{Name:addons-009000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:49:52.947852   20090 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:49:52.957488   20090 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0318 04:49:53.007381   20090 start.go:159] libmachine.API.Create for "addons-009000" (driver="qemu2")
	I0318 04:49:53.007420   20090 client.go:168] LocalClient.Create starting
	I0318 04:49:53.007544   20090 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 04:49:53.007612   20090 main.go:141] libmachine: Decoding PEM data...
	I0318 04:49:53.007633   20090 main.go:141] libmachine: Parsing certificate...
	I0318 04:49:53.007741   20090 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 04:49:53.007786   20090 main.go:141] libmachine: Decoding PEM data...
	I0318 04:49:53.007797   20090 main.go:141] libmachine: Parsing certificate...
	I0318 04:49:53.008321   20090 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:49:53.158343   20090 main.go:141] libmachine: Creating SSH key...
	I0318 04:49:53.224708   20090 main.go:141] libmachine: Creating Disk image...
	I0318 04:49:53.224713   20090 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:49:53.224933   20090 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/addons-009000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/addons-009000/disk.qcow2
	I0318 04:49:53.237206   20090 main.go:141] libmachine: STDOUT: 
	I0318 04:49:53.237232   20090 main.go:141] libmachine: STDERR: 
	I0318 04:49:53.237297   20090 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/addons-009000/disk.qcow2 +20000M
	I0318 04:49:53.248342   20090 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:49:53.248362   20090 main.go:141] libmachine: STDERR: 
	I0318 04:49:53.248374   20090 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/addons-009000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/addons-009000/disk.qcow2
	I0318 04:49:53.248377   20090 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:49:53.248410   20090 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/addons-009000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/addons-009000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/addons-009000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:58:09:97:c9:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/addons-009000/disk.qcow2
	I0318 04:49:53.250133   20090 main.go:141] libmachine: STDOUT: 
	I0318 04:49:53.250147   20090 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:49:53.250162   20090 client.go:171] duration metric: took 242.745291ms to LocalClient.Create
	I0318 04:49:55.252419   20090 start.go:128] duration metric: took 2.304575833s to createHost
	I0318 04:49:55.252493   20090 start.go:83] releasing machines lock for "addons-009000", held for 2.305135041s
	W0318 04:49:55.252806   20090 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-009000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-009000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:49:55.262347   20090 out.go:177] 
	W0318 04:49:55.269342   20090 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:49:55.269367   20090 out.go:239] * 
	* 
	W0318 04:49:55.271960   20090 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:49:55.282240   20090 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-009000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.24s)
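
The stderr above shows the driver's full recovery path: the first create fails, the profile is deleted, minikube waits five seconds ("Will try again in 5 seconds ..."), the second create fails identically, and the command exits with status 80. A hedged sketch of that shape, where createHost and deleteHost are hypothetical stand-ins rather than minikube's actual start.go API:

// Sketch of the create, delete, wait, retry flow visible in the stderr above.
// createHost and deleteHost are illustrative stand-ins for the driver calls.
package main

import (
	"errors"
	"fmt"
	"time"
)

func createHost(name string) error {
	// Stand-in for libmachine's create step; on this agent it always failed.
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func deleteHost(name string) {
	fmt.Printf("* Deleting %q in qemu2 ...\n", name)
}

func startWithRetry(name string) error {
	err := createHost(name)
	if err == nil {
		return nil
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	deleteHost(name)
	time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
	return createHost(name)     // second and final attempt
}

func main() {
	if err := startWithRetry("addons-009000"); err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err) // the exit status 80 path
	}
}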

TestCertOptions (10.15s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-386000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-386000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.857622542s)

-- stdout --
	* [cert-options-386000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-386000" primary control-plane node in "cert-options-386000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-386000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-386000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-386000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-386000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-386000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (83.433917ms)

-- stdout --
	* The control-plane node cert-options-386000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-386000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-386000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-386000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-386000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-386000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (44.208ms)

-- stdout --
	* The control-plane node cert-options-386000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-386000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-386000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control-plane node cert-options-386000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-386000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-03-18 05:13:17.484926 -0700 PDT m=+1516.511221001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-386000 -n cert-options-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-386000 -n cert-options-386000: exit status 7 (32.324125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-386000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-386000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-386000
--- FAIL: TestCertOptions (10.15s)
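
For context, the SAN assertions at cert_options_test.go:69 never had a certificate to inspect: the ssh step that would dump /var/lib/minikube/certs/apiserver.crt returned exit status 83 because the host was stopped. A hedged sketch of the check itself using only the standard library; "apiserver.crt" here is a hypothetical local copy, not the file from the VM:

// Sketch: decode a PEM certificate and print the SAN entries that
// TestCertOptions would have asserted on, had the VM started.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("apiserver.crt") // hypothetical local copy of the cert
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // expected: localhost, www.google.com
	fmt.Println("IP SANs: ", cert.IPAddresses) // expected: 127.0.0.1, 192.168.15.15
}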

TestCertExpiration (197.43s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-110000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-110000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (12.00106925s)

-- stdout --
	* [cert-expiration-110000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-110000" primary control-plane node in "cert-expiration-110000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-110000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-110000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-110000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-110000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-110000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.247409s)

-- stdout --
	* [cert-expiration-110000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-110000" primary control-plane node in "cert-expiration-110000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-110000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-110000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-110000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-110000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-110000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-110000" primary control-plane node in "cert-expiration-110000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-110000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-110000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-110000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-03-18 05:16:09.978697 -0700 PDT m=+1689.010749876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-110000 -n cert-expiration-110000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-110000 -n cert-expiration-110000: exit status 7 (73.764792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-110000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-110000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-110000
--- FAIL: TestCertExpiration (197.43s)
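
The 197s wall time comes from the test design rather than the failed starts: TestCertExpiration starts a cluster with --cert-expiration=3m, waits out the three minutes, then restarts with --cert-expiration=8760h and expects a warning about expired certificates; both starts failed within seconds, and the wait accounts for the rest. A hedged sketch of the expiry condition the restart should have detected ("apiserver.crt" is a hypothetical path):

// Sketch: the expired-certificate condition the restart was expected to
// warn about, checked against the certificate's NotAfter timestamp.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("apiserver.crt") // hypothetical cert path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().After(cert.NotAfter) {
		fmt.Printf("certificate expired at %s; a warning is expected here\n", cert.NotAfter)
	} else {
		fmt.Printf("certificate still valid until %s\n", cert.NotAfter)
	}
}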

TestDockerFlags (12.36s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-182000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-182000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.090580083s)

-- stdout --
	* [docker-flags-182000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-182000" primary control-plane node in "docker-flags-182000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-182000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:12:55.138015   22113 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:12:55.138132   22113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:12:55.138135   22113 out.go:304] Setting ErrFile to fd 2...
	I0318 05:12:55.138137   22113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:12:55.138278   22113 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:12:55.139376   22113 out.go:298] Setting JSON to false
	I0318 05:12:55.156068   22113 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11548,"bootTime":1710752427,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:12:55.156130   22113 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:12:55.162211   22113 out.go:177] * [docker-flags-182000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:12:55.175149   22113 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:12:55.171209   22113 notify.go:220] Checking for updates...
	I0318 05:12:55.183095   22113 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:12:55.190091   22113 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:12:55.194164   22113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:12:55.197102   22113 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:12:55.200145   22113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:12:55.203584   22113 config.go:182] Loaded profile config "cert-expiration-110000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:12:55.203648   22113 config.go:182] Loaded profile config "multinode-730000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:12:55.203710   22113 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:12:55.208105   22113 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 05:12:55.215142   22113 start.go:297] selected driver: qemu2
	I0318 05:12:55.215149   22113 start.go:901] validating driver "qemu2" against <nil>
	I0318 05:12:55.215155   22113 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:12:55.217300   22113 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 05:12:55.220052   22113 out.go:177] * Automatically selected the socket_vmnet network
	I0318 05:12:55.223188   22113 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0318 05:12:55.223243   22113 cni.go:84] Creating CNI manager for ""
	I0318 05:12:55.223250   22113 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:12:55.223254   22113 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 05:12:55.223286   22113 start.go:340] cluster config:
	{Name:docker-flags-182000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-182000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:12:55.227563   22113 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:12:55.235116   22113 out.go:177] * Starting "docker-flags-182000" primary control-plane node in "docker-flags-182000" cluster
	I0318 05:12:55.239120   22113 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 05:12:55.239145   22113 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 05:12:55.239158   22113 cache.go:56] Caching tarball of preloaded images
	I0318 05:12:55.239224   22113 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:12:55.239230   22113 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 05:12:55.239290   22113 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/docker-flags-182000/config.json ...
	I0318 05:12:55.239301   22113 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/docker-flags-182000/config.json: {Name:mkf91d81e66d5aee0521ee804cfe67bb62030a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:12:55.239513   22113 start.go:360] acquireMachinesLock for docker-flags-182000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:12:57.192496   22113 start.go:364] duration metric: took 1.952955917s to acquireMachinesLock for "docker-flags-182000"
	I0318 05:12:57.192641   22113 start.go:93] Provisioning new machine with config: &{Name:docker-flags-182000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-182000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:12:57.192969   22113 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:12:57.200587   22113 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 05:12:57.250474   22113 start.go:159] libmachine.API.Create for "docker-flags-182000" (driver="qemu2")
	I0318 05:12:57.250536   22113 client.go:168] LocalClient.Create starting
	I0318 05:12:57.250680   22113 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:12:57.250733   22113 main.go:141] libmachine: Decoding PEM data...
	I0318 05:12:57.250751   22113 main.go:141] libmachine: Parsing certificate...
	I0318 05:12:57.250822   22113 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:12:57.250864   22113 main.go:141] libmachine: Decoding PEM data...
	I0318 05:12:57.250901   22113 main.go:141] libmachine: Parsing certificate...
	I0318 05:12:57.251534   22113 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:12:57.402991   22113 main.go:141] libmachine: Creating SSH key...
	I0318 05:12:57.545804   22113 main.go:141] libmachine: Creating Disk image...
	I0318 05:12:57.545814   22113 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:12:57.546007   22113 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/docker-flags-182000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/docker-flags-182000/disk.qcow2
	I0318 05:12:57.558671   22113 main.go:141] libmachine: STDOUT: 
	I0318 05:12:57.558687   22113 main.go:141] libmachine: STDERR: 
	I0318 05:12:57.558740   22113 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/docker-flags-182000/disk.qcow2 +20000M
	I0318 05:12:57.569274   22113 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:12:57.569290   22113 main.go:141] libmachine: STDERR: 
	I0318 05:12:57.569304   22113 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/docker-flags-182000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/docker-flags-182000/disk.qcow2
	I0318 05:12:57.569311   22113 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:12:57.569337   22113 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/docker-flags-182000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/docker-flags-182000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/docker-flags-182000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:5c:7c:da:61:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/docker-flags-182000/disk.qcow2
	I0318 05:12:57.571059   22113 main.go:141] libmachine: STDOUT: 
	I0318 05:12:57.571075   22113 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:12:57.571093   22113 client.go:171] duration metric: took 320.560459ms to LocalClient.Create
	I0318 05:12:59.573210   22113 start.go:128] duration metric: took 2.380269333s to createHost
	I0318 05:12:59.573329   22113 start.go:83] releasing machines lock for "docker-flags-182000", held for 2.380804583s
	W0318 05:12:59.573417   22113 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:12:59.591607   22113 out.go:177] * Deleting "docker-flags-182000" in qemu2 ...
	W0318 05:12:59.622100   22113 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:12:59.622142   22113 start.go:728] Will try again in 5 seconds ...
	I0318 05:13:04.622353   22113 start.go:360] acquireMachinesLock for docker-flags-182000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:13:04.690378   22113 start.go:364] duration metric: took 67.893625ms to acquireMachinesLock for "docker-flags-182000"
	I0318 05:13:04.690553   22113 start.go:93] Provisioning new machine with config: &{Name:docker-flags-182000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-182000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:13:04.690778   22113 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:13:04.700132   22113 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 05:13:04.750184   22113 start.go:159] libmachine.API.Create for "docker-flags-182000" (driver="qemu2")
	I0318 05:13:04.750262   22113 client.go:168] LocalClient.Create starting
	I0318 05:13:04.750446   22113 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:13:04.750494   22113 main.go:141] libmachine: Decoding PEM data...
	I0318 05:13:04.750511   22113 main.go:141] libmachine: Parsing certificate...
	I0318 05:13:04.750569   22113 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:13:04.750600   22113 main.go:141] libmachine: Decoding PEM data...
	I0318 05:13:04.750611   22113 main.go:141] libmachine: Parsing certificate...
	I0318 05:13:04.751039   22113 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:13:04.917880   22113 main.go:141] libmachine: Creating SSH key...
	I0318 05:13:05.123164   22113 main.go:141] libmachine: Creating Disk image...
	I0318 05:13:05.123171   22113 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:13:05.123377   22113 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/docker-flags-182000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/docker-flags-182000/disk.qcow2
	I0318 05:13:05.136351   22113 main.go:141] libmachine: STDOUT: 
	I0318 05:13:05.136372   22113 main.go:141] libmachine: STDERR: 
	I0318 05:13:05.136464   22113 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/docker-flags-182000/disk.qcow2 +20000M
	I0318 05:13:05.147488   22113 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:13:05.147505   22113 main.go:141] libmachine: STDERR: 
	I0318 05:13:05.147521   22113 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/docker-flags-182000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/docker-flags-182000/disk.qcow2
	I0318 05:13:05.147525   22113 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:13:05.147567   22113 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/docker-flags-182000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/docker-flags-182000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/docker-flags-182000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:30:e2:81:48:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/docker-flags-182000/disk.qcow2
	I0318 05:13:05.149318   22113 main.go:141] libmachine: STDOUT: 
	I0318 05:13:05.149333   22113 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:13:05.149354   22113 client.go:171] duration metric: took 399.08225ms to LocalClient.Create
	I0318 05:13:07.151521   22113 start.go:128] duration metric: took 2.460779083s to createHost
	I0318 05:13:07.151623   22113 start.go:83] releasing machines lock for "docker-flags-182000", held for 2.461269792s
	W0318 05:13:07.152056   22113 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-182000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-182000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:13:07.165681   22113 out.go:177] 
	W0318 05:13:07.169822   22113 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:13:07.169864   22113 out.go:239] * 
	* 
	W0318 05:13:07.172247   22113 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:13:07.182787   22113 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-182000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
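Note: every start attempt in this test dies at the same step. The qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the socket_vmnet daemon's Unix socket; the daemon is evidently not listening, so the client exits with "Connection refused" before QEMU ever runs. A minimal standalone Go sketch (a hypothetical diagnostic, not part of the minikube tree) that reproduces just the failing connection step:

	// probe_socket_vmnet.go: dial the same Unix socket that
	// socket_vmnet_client needs. "connection refused" here means the
	// socket_vmnet daemon is not running (or not listening at this path).
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failing logs
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections at", sock)
	}

On a healthy CI host this prints the success line; on this run it would fail exactly the way the qemu2 driver does above.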
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-182000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-182000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.949042ms)

-- stdout --
	* The control-plane node docker-flags-182000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-182000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-182000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-182000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-182000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-182000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-182000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-182000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-182000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (47.943042ms)

-- stdout --
	* The control-plane node docker-flags-182000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-182000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-182000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-182000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-182000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-182000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-03-18 05:13:07.329252 -0700 PDT m=+1506.355208126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-182000 -n docker-flags-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-182000 -n docker-flags-182000: exit status 7 (32.483083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
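Note: the "exit status 7 (may be ok)" from minikube status is a bitmask rather than a generic failure: per `minikube status --help`, bit 1 is set when minikube itself is not OK, bit 2 when the cluster is not OK, and bit 4 when Kubernetes is not OK, so 7 means all three are down, consistent with the "Stopped" output above. A small decoding sketch (assuming that documented bit layout):

	// decodeStatusExit interprets minikube's status exit code as the
	// bitmask described in `minikube status --help`.
	package main

	import "fmt"

	func decodeStatusExit(code int) []string {
		var down []string
		if code&1 != 0 {
			down = append(down, "minikube")
		}
		if code&2 != 0 {
			down = append(down, "cluster")
		}
		if code&4 != 0 {
			down = append(down, "kubernetes")
		}
		return down
	}

	func main() {
		fmt.Println("exit 7 => not OK:", decodeStatusExit(7)) // [minikube cluster kubernetes]
	}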
helpers_test.go:241: "docker-flags-182000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-182000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-182000
--- FAIL: TestDockerFlags (12.36s)

TestForceSystemdFlag (9.98s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-564000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-564000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.763886125s)

-- stdout --
	* [force-systemd-flag-564000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-564000" primary control-plane node in "force-systemd-flag-564000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-564000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:12:22.250306   21961 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:12:22.250439   21961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:12:22.250442   21961 out.go:304] Setting ErrFile to fd 2...
	I0318 05:12:22.250445   21961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:12:22.250570   21961 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:12:22.251619   21961 out.go:298] Setting JSON to false
	I0318 05:12:22.267732   21961 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11515,"bootTime":1710752427,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:12:22.267791   21961 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:12:22.271456   21961 out.go:177] * [force-systemd-flag-564000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:12:22.279438   21961 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:12:22.283397   21961 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:12:22.279485   21961 notify.go:220] Checking for updates...
	I0318 05:12:22.289346   21961 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:12:22.292424   21961 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:12:22.295424   21961 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:12:22.298372   21961 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:12:22.301692   21961 config.go:182] Loaded profile config "NoKubernetes-277000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:12:22.301772   21961 config.go:182] Loaded profile config "multinode-730000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:12:22.301814   21961 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:12:22.306465   21961 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 05:12:22.313340   21961 start.go:297] selected driver: qemu2
	I0318 05:12:22.313349   21961 start.go:901] validating driver "qemu2" against <nil>
	I0318 05:12:22.313355   21961 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:12:22.315659   21961 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 05:12:22.318362   21961 out.go:177] * Automatically selected the socket_vmnet network
	I0318 05:12:22.321409   21961 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 05:12:22.321438   21961 cni.go:84] Creating CNI manager for ""
	I0318 05:12:22.321446   21961 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:12:22.321450   21961 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 05:12:22.321486   21961 start.go:340] cluster config:
	{Name:force-systemd-flag-564000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-564000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:12:22.326021   21961 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:12:22.333435   21961 out.go:177] * Starting "force-systemd-flag-564000" primary control-plane node in "force-systemd-flag-564000" cluster
	I0318 05:12:22.337381   21961 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 05:12:22.337398   21961 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 05:12:22.337407   21961 cache.go:56] Caching tarball of preloaded images
	I0318 05:12:22.337466   21961 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:12:22.337473   21961 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 05:12:22.337544   21961 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/force-systemd-flag-564000/config.json ...
	I0318 05:12:22.337556   21961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/force-systemd-flag-564000/config.json: {Name:mke1fda050bda346b0598fb252f05a3b433e41b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:12:22.337784   21961 start.go:360] acquireMachinesLock for force-systemd-flag-564000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:12:22.337820   21961 start.go:364] duration metric: took 28µs to acquireMachinesLock for "force-systemd-flag-564000"
	I0318 05:12:22.337834   21961 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-564000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-564000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:12:22.337862   21961 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:12:22.345383   21961 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 05:12:22.362939   21961 start.go:159] libmachine.API.Create for "force-systemd-flag-564000" (driver="qemu2")
	I0318 05:12:22.362966   21961 client.go:168] LocalClient.Create starting
	I0318 05:12:22.363037   21961 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:12:22.363070   21961 main.go:141] libmachine: Decoding PEM data...
	I0318 05:12:22.363086   21961 main.go:141] libmachine: Parsing certificate...
	I0318 05:12:22.363135   21961 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:12:22.363157   21961 main.go:141] libmachine: Decoding PEM data...
	I0318 05:12:22.363163   21961 main.go:141] libmachine: Parsing certificate...
	I0318 05:12:22.363540   21961 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:12:22.511410   21961 main.go:141] libmachine: Creating SSH key...
	I0318 05:12:22.571259   21961 main.go:141] libmachine: Creating Disk image...
	I0318 05:12:22.571264   21961 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:12:22.571444   21961 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-flag-564000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-flag-564000/disk.qcow2
	I0318 05:12:22.583643   21961 main.go:141] libmachine: STDOUT: 
	I0318 05:12:22.583677   21961 main.go:141] libmachine: STDERR: 
	I0318 05:12:22.583734   21961 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-flag-564000/disk.qcow2 +20000M
	I0318 05:12:22.594493   21961 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:12:22.594511   21961 main.go:141] libmachine: STDERR: 
	I0318 05:12:22.594526   21961 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-flag-564000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-flag-564000/disk.qcow2
	I0318 05:12:22.594533   21961 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:12:22.594561   21961 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-flag-564000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-flag-564000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-flag-564000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:2d:bd:d4:75:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-flag-564000/disk.qcow2
	I0318 05:12:22.596411   21961 main.go:141] libmachine: STDOUT: 
	I0318 05:12:22.596428   21961 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:12:22.596451   21961 client.go:171] duration metric: took 233.488333ms to LocalClient.Create
	I0318 05:12:24.598579   21961 start.go:128] duration metric: took 2.260772s to createHost
	I0318 05:12:24.598664   21961 start.go:83] releasing machines lock for "force-systemd-flag-564000", held for 2.26091125s
	W0318 05:12:24.598794   21961 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:12:24.615881   21961 out.go:177] * Deleting "force-systemd-flag-564000" in qemu2 ...
	W0318 05:12:24.640101   21961 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:12:24.640137   21961 start.go:728] Will try again in 5 seconds ...
	I0318 05:12:29.642023   21961 start.go:360] acquireMachinesLock for force-systemd-flag-564000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:12:29.642112   21961 start.go:364] duration metric: took 72.333µs to acquireMachinesLock for "force-systemd-flag-564000"
	I0318 05:12:29.642133   21961 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-564000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-564000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:12:29.642172   21961 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:12:29.653500   21961 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 05:12:29.668105   21961 start.go:159] libmachine.API.Create for "force-systemd-flag-564000" (driver="qemu2")
	I0318 05:12:29.668143   21961 client.go:168] LocalClient.Create starting
	I0318 05:12:29.668215   21961 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:12:29.668243   21961 main.go:141] libmachine: Decoding PEM data...
	I0318 05:12:29.668252   21961 main.go:141] libmachine: Parsing certificate...
	I0318 05:12:29.668294   21961 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:12:29.668308   21961 main.go:141] libmachine: Decoding PEM data...
	I0318 05:12:29.668317   21961 main.go:141] libmachine: Parsing certificate...
	I0318 05:12:29.669136   21961 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:12:29.855047   21961 main.go:141] libmachine: Creating SSH key...
	I0318 05:12:29.893502   21961 main.go:141] libmachine: Creating Disk image...
	I0318 05:12:29.893509   21961 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:12:29.893696   21961 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-flag-564000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-flag-564000/disk.qcow2
	I0318 05:12:29.905824   21961 main.go:141] libmachine: STDOUT: 
	I0318 05:12:29.905924   21961 main.go:141] libmachine: STDERR: 
	I0318 05:12:29.905989   21961 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-flag-564000/disk.qcow2 +20000M
	I0318 05:12:29.916877   21961 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:12:29.916898   21961 main.go:141] libmachine: STDERR: 
	I0318 05:12:29.916918   21961 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-flag-564000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-flag-564000/disk.qcow2
	I0318 05:12:29.916924   21961 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:12:29.916977   21961 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-flag-564000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-flag-564000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-flag-564000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:5f:5d:47:12:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-flag-564000/disk.qcow2
	I0318 05:12:29.918826   21961 main.go:141] libmachine: STDOUT: 
	I0318 05:12:29.918850   21961 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:12:29.918868   21961 client.go:171] duration metric: took 250.7235ms to LocalClient.Create
	I0318 05:12:31.920996   21961 start.go:128] duration metric: took 2.278878875s to createHost
	I0318 05:12:31.921087   21961 start.go:83] releasing machines lock for "force-systemd-flag-564000", held for 2.279041625s
	W0318 05:12:31.921450   21961 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-564000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-564000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:12:31.930204   21961 out.go:177] 
	W0318 05:12:31.949008   21961 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:12:31.949033   21961 out.go:239] * 
	* 
	W0318 05:12:31.951815   21961 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:12:31.964101   21961 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-564000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-564000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-564000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.257ms)

-- stdout --
	* The control-plane node force-systemd-flag-564000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-564000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-564000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
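Note: docker_test.go:110 is ultimately asking the guest's Docker daemon which cgroup driver it uses; with --force-systemd the expected answer is "systemd" rather than "cgroupfs". Because the VM never booted, the ssh wrapper exits 83 before the question can be asked. The same query against any reachable Docker daemon looks like this (standalone sketch assuming a local docker CLI; not the test's code):

	// cgroupdriver.go: run `docker info --format {{.CgroupDriver}}`,
	// the exact command the test tries to run inside the VM.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println("docker info failed:", err)
			return
		}
		fmt.Println("cgroup driver:", strings.TrimSpace(string(out)))
	}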
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-03-18 05:12:32.062526 -0700 PDT m=+1471.087298251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-564000 -n force-systemd-flag-564000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-564000 -n force-systemd-flag-564000: exit status 7 (35.276833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-564000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-564000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-564000
--- FAIL: TestForceSystemdFlag (9.98s)
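Note: both attempts in this test follow the retry shape visible in the log: StartHost fails, minikube deletes the half-created machine, logs "Will try again in 5 seconds ...", and retries once before exiting with GUEST_PROVISION. A generic sketch of that pattern (hypothetical, not minikube's actual start.go):

	// retry sketch: one delayed retry around a host-creation step.
	// createHost is a stand-in that always fails the way this run did.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}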

TestForceSystemdEnv (10.2s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-426000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-426000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.96395425s)

-- stdout --
	* [force-systemd-env-426000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-426000" primary control-plane node in "force-systemd-env-426000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-426000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:12:44.935181   22066 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:12:44.935317   22066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:12:44.935324   22066 out.go:304] Setting ErrFile to fd 2...
	I0318 05:12:44.935326   22066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:12:44.935465   22066 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:12:44.936485   22066 out.go:298] Setting JSON to false
	I0318 05:12:44.952554   22066 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11537,"bootTime":1710752427,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:12:44.952623   22066 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:12:44.958423   22066 out.go:177] * [force-systemd-env-426000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:12:44.965395   22066 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:12:44.970395   22066 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:12:44.965441   22066 notify.go:220] Checking for updates...
	I0318 05:12:44.976350   22066 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:12:44.979424   22066 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:12:44.982416   22066 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:12:44.983790   22066 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0318 05:12:44.986714   22066 config.go:182] Loaded profile config "NoKubernetes-277000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0318 05:12:44.986783   22066 config.go:182] Loaded profile config "multinode-730000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:12:44.986836   22066 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:12:44.991360   22066 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 05:12:44.996398   22066 start.go:297] selected driver: qemu2
	I0318 05:12:44.996404   22066 start.go:901] validating driver "qemu2" against <nil>
	I0318 05:12:44.996409   22066 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:12:44.998666   22066 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 05:12:45.002369   22066 out.go:177] * Automatically selected the socket_vmnet network
	I0318 05:12:45.003843   22066 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 05:12:45.003879   22066 cni.go:84] Creating CNI manager for ""
	I0318 05:12:45.003886   22066 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:12:45.003890   22066 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 05:12:45.003920   22066 start.go:340] cluster config:
	{Name:force-systemd-env-426000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-426000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:12:45.008350   22066 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:12:45.015436   22066 out.go:177] * Starting "force-systemd-env-426000" primary control-plane node in "force-systemd-env-426000" cluster
	I0318 05:12:45.019341   22066 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 05:12:45.019356   22066 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 05:12:45.019365   22066 cache.go:56] Caching tarball of preloaded images
	I0318 05:12:45.019436   22066 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:12:45.019442   22066 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 05:12:45.019494   22066 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/force-systemd-env-426000/config.json ...
	I0318 05:12:45.019504   22066 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/force-systemd-env-426000/config.json: {Name:mk4f68bf3b14efe062ad0e711b575ee91412e506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:12:45.019723   22066 start.go:360] acquireMachinesLock for force-systemd-env-426000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:12:45.019756   22066 start.go:364] duration metric: took 27.083µs to acquireMachinesLock for "force-systemd-env-426000"
	I0318 05:12:45.019770   22066 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-426000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-426000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:12:45.019809   22066 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:12:45.027377   22066 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 05:12:45.045043   22066 start.go:159] libmachine.API.Create for "force-systemd-env-426000" (driver="qemu2")
	I0318 05:12:45.045071   22066 client.go:168] LocalClient.Create starting
	I0318 05:12:45.045134   22066 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:12:45.045165   22066 main.go:141] libmachine: Decoding PEM data...
	I0318 05:12:45.045176   22066 main.go:141] libmachine: Parsing certificate...
	I0318 05:12:45.045222   22066 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:12:45.045244   22066 main.go:141] libmachine: Decoding PEM data...
	I0318 05:12:45.045251   22066 main.go:141] libmachine: Parsing certificate...
	I0318 05:12:45.045614   22066 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:12:45.213237   22066 main.go:141] libmachine: Creating SSH key...
	I0318 05:12:45.356774   22066 main.go:141] libmachine: Creating Disk image...
	I0318 05:12:45.356782   22066 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:12:45.356935   22066 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-env-426000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-env-426000/disk.qcow2
	I0318 05:12:45.371343   22066 main.go:141] libmachine: STDOUT: 
	I0318 05:12:45.371363   22066 main.go:141] libmachine: STDERR: 
	I0318 05:12:45.371413   22066 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-env-426000/disk.qcow2 +20000M
	I0318 05:12:45.382179   22066 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:12:45.382197   22066 main.go:141] libmachine: STDERR: 
	I0318 05:12:45.382211   22066 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-env-426000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-env-426000/disk.qcow2
	I0318 05:12:45.382215   22066 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:12:45.382252   22066 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-env-426000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-env-426000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-env-426000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:2a:e2:86:ef:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-env-426000/disk.qcow2
	I0318 05:12:45.383945   22066 main.go:141] libmachine: STDOUT: 
	I0318 05:12:45.383961   22066 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:12:45.383979   22066 client.go:171] duration metric: took 338.913667ms to LocalClient.Create
	I0318 05:12:47.386140   22066 start.go:128] duration metric: took 2.366384875s to createHost
	I0318 05:12:47.386242   22066 start.go:83] releasing machines lock for "force-systemd-env-426000", held for 2.366555583s
	W0318 05:12:47.386400   22066 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:12:47.402908   22066 out.go:177] * Deleting "force-systemd-env-426000" in qemu2 ...
	W0318 05:12:47.433647   22066 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:12:47.433682   22066 start.go:728] Will try again in 5 seconds ...
	I0318 05:12:52.435700   22066 start.go:360] acquireMachinesLock for force-systemd-env-426000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:12:52.446591   22066 start.go:364] duration metric: took 10.795125ms to acquireMachinesLock for "force-systemd-env-426000"
	I0318 05:12:52.446650   22066 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-426000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-426000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:12:52.446893   22066 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:12:52.459052   22066 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 05:12:52.506443   22066 start.go:159] libmachine.API.Create for "force-systemd-env-426000" (driver="qemu2")
	I0318 05:12:52.506497   22066 client.go:168] LocalClient.Create starting
	I0318 05:12:52.506611   22066 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:12:52.506677   22066 main.go:141] libmachine: Decoding PEM data...
	I0318 05:12:52.506693   22066 main.go:141] libmachine: Parsing certificate...
	I0318 05:12:52.506755   22066 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:12:52.506797   22066 main.go:141] libmachine: Decoding PEM data...
	I0318 05:12:52.506808   22066 main.go:141] libmachine: Parsing certificate...
	I0318 05:12:52.507317   22066 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:12:52.681742   22066 main.go:141] libmachine: Creating SSH key...
	I0318 05:12:52.796381   22066 main.go:141] libmachine: Creating Disk image...
	I0318 05:12:52.796395   22066 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:12:52.796567   22066 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-env-426000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-env-426000/disk.qcow2
	I0318 05:12:52.815224   22066 main.go:141] libmachine: STDOUT: 
	I0318 05:12:52.815245   22066 main.go:141] libmachine: STDERR: 
	I0318 05:12:52.815301   22066 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-env-426000/disk.qcow2 +20000M
	I0318 05:12:52.827113   22066 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:12:52.827129   22066 main.go:141] libmachine: STDERR: 
	I0318 05:12:52.827145   22066 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-env-426000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-env-426000/disk.qcow2
	I0318 05:12:52.827152   22066 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:12:52.827192   22066 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-env-426000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-env-426000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-env-426000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:27:87:68:eb:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/force-systemd-env-426000/disk.qcow2
	I0318 05:12:52.828871   22066 main.go:141] libmachine: STDOUT: 
	I0318 05:12:52.828887   22066 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:12:52.828902   22066 client.go:171] duration metric: took 322.403666ms to LocalClient.Create
	I0318 05:12:54.831018   22066 start.go:128] duration metric: took 2.384170584s to createHost
	I0318 05:12:54.831084   22066 start.go:83] releasing machines lock for "force-systemd-env-426000", held for 2.38454675s
	W0318 05:12:54.831460   22066 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-426000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-426000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:12:54.844219   22066 out.go:177] 
	W0318 05:12:54.849274   22066 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:12:54.849313   22066 out.go:239] * 
	* 
	W0318 05:12:54.851852   22066 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:12:54.862082   22066 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-426000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-426000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-426000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (75.122167ms)

-- stdout --
	* The control-plane node force-systemd-env-426000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-426000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-426000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-03-18 05:12:54.946417 -0700 PDT m=+1493.971958709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-426000 -n force-systemd-env-426000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-426000 -n force-systemd-env-426000: exit status 7 (37.3775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-426000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-426000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-426000
--- FAIL: TestForceSystemdEnv (10.20s)
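Every failure in this block (and in the blocks below) shares a single root cause: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client cannot hand the VM its network file descriptor. A minimal triage sketch for the CI host, reusing the paths shown in the logs above (nothing here is taken from the test harness itself):

    ls -l /var/run/socket_vmnet                    # does the socket file exist?
    sudo launchctl list | grep -i socket_vmnet     # is the daemon loaded at all?
    # Exercise the client exactly as minikube does; with a healthy daemon this
    # should exit 0 instead of printing "Connection refused":
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true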

TestErrorSpam/setup (9.86s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-701000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-701000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 --driver=qemu2 : exit status 80 (9.853986083s)

-- stdout --
	* [nospam-701000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-701000" primary control-plane node in "nospam-701000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-701000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-701000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-701000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-701000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-701000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18427
- KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-701000" primary control-plane node in "nospam-701000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-701000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-701000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.86s)
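The spam check itself behaves as designed here: the start command exits 80 after two failed create attempts, and every resulting stderr line is flagged as unexpected. A hedged recovery sketch for the host; the brew service name and launchd label below are assumptions (the /opt/socket_vmnet prefix in the logs suggests a make-installed copy, so adjust to the actual layout):

    sudo brew services restart socket_vmnet
    # or, for a launchd-managed install (label is an assumption):
    sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet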

TestFunctional/serial/StartWithProxy (9.9s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-681000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-681000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.82063975s)

-- stdout --
	* [functional-681000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-681000" primary control-plane node in "functional-681000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-681000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:54122 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:54122 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:54122 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-681000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-681000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-681000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18427
- KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-681000" primary control-plane node in "functional-681000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-681000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:54122 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:54122 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:54122 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-681000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000: exit status 7 (73.039917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.90s)
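The "Local proxy ignored" warnings are expected noise: the test exports HTTP_PROXY pointing at a throwaway local proxy, and minikube deliberately declines to pass a localhost proxy into the VM, as its own warning says. The assertions fail only because the run never gets far enough to print "Found network options". A repro sketch outside the harness, reusing the exact arguments and proxy value from the log above (the port is whatever the test's proxy happened to bind on this run):

    HTTP_PROXY=localhost:54122 out/minikube-darwin-arm64 start -p functional-681000 \
      --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2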

TestFunctional/serial/SoftStart (5.27s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-681000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-681000 --alsologtostderr -v=8: exit status 80 (5.199349708s)

-- stdout --
	* [functional-681000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-681000" primary control-plane node in "functional-681000" cluster
	* Restarting existing qemu2 VM for "functional-681000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-681000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:50:24.434137   20229 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:50:24.434260   20229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:50:24.434263   20229 out.go:304] Setting ErrFile to fd 2...
	I0318 04:50:24.434265   20229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:50:24.434389   20229 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:50:24.435373   20229 out.go:298] Setting JSON to false
	I0318 04:50:24.451288   20229 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10197,"bootTime":1710752427,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:50:24.451354   20229 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:50:24.455448   20229 out.go:177] * [functional-681000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:50:24.461281   20229 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 04:50:24.461348   20229 notify.go:220] Checking for updates...
	I0318 04:50:24.469316   20229 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 04:50:24.477118   20229 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:50:24.480314   20229 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:50:24.488156   20229 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 04:50:24.492315   20229 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:50:24.495682   20229 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:50:24.495746   20229 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:50:24.499186   20229 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:50:24.506314   20229 start.go:297] selected driver: qemu2
	I0318 04:50:24.506322   20229 start.go:901] validating driver "qemu2" against &{Name:functional-681000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-681000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:50:24.506378   20229 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:50:24.508768   20229 cni.go:84] Creating CNI manager for ""
	I0318 04:50:24.508789   20229 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:50:24.508832   20229 start.go:340] cluster config:
	{Name:functional-681000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-681000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:50:24.513484   20229 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:50:24.521276   20229 out.go:177] * Starting "functional-681000" primary control-plane node in "functional-681000" cluster
	I0318 04:50:24.525318   20229 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:50:24.525337   20229 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:50:24.525347   20229 cache.go:56] Caching tarball of preloaded images
	I0318 04:50:24.525404   20229 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:50:24.525409   20229 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:50:24.525469   20229 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/functional-681000/config.json ...
	I0318 04:50:24.525946   20229 start.go:360] acquireMachinesLock for functional-681000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:50:24.525972   20229 start.go:364] duration metric: took 20.125µs to acquireMachinesLock for "functional-681000"
	I0318 04:50:24.525981   20229 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:50:24.525987   20229 fix.go:54] fixHost starting: 
	I0318 04:50:24.526106   20229 fix.go:112] recreateIfNeeded on functional-681000: state=Stopped err=<nil>
	W0318 04:50:24.526114   20229 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:50:24.529286   20229 out.go:177] * Restarting existing qemu2 VM for "functional-681000" ...
	I0318 04:50:24.537253   20229 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:0f:42:4b:60:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/disk.qcow2
	I0318 04:50:24.539313   20229 main.go:141] libmachine: STDOUT: 
	I0318 04:50:24.539334   20229 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:50:24.539364   20229 fix.go:56] duration metric: took 13.37725ms for fixHost
	I0318 04:50:24.539370   20229 start.go:83] releasing machines lock for "functional-681000", held for 13.394125ms
	W0318 04:50:24.539378   20229 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:50:24.539411   20229 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:50:24.539416   20229 start.go:728] Will try again in 5 seconds ...
	I0318 04:50:29.541417   20229 start.go:360] acquireMachinesLock for functional-681000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:50:29.541697   20229 start.go:364] duration metric: took 221.209µs to acquireMachinesLock for "functional-681000"
	I0318 04:50:29.541834   20229 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:50:29.541852   20229 fix.go:54] fixHost starting: 
	I0318 04:50:29.542518   20229 fix.go:112] recreateIfNeeded on functional-681000: state=Stopped err=<nil>
	W0318 04:50:29.542550   20229 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:50:29.548010   20229 out.go:177] * Restarting existing qemu2 VM for "functional-681000" ...
	I0318 04:50:29.555050   20229 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:0f:42:4b:60:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/disk.qcow2
	I0318 04:50:29.564642   20229 main.go:141] libmachine: STDOUT: 
	I0318 04:50:29.564714   20229 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:50:29.564832   20229 fix.go:56] duration metric: took 22.9345ms for fixHost
	I0318 04:50:29.564853   20229 start.go:83] releasing machines lock for "functional-681000", held for 23.128458ms
	W0318 04:50:29.565000   20229 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-681000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-681000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:50:29.572938   20229 out.go:177] 
	W0318 04:50:29.576927   20229 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:50:29.576952   20229 out.go:239] * 
	* 
	W0318 04:50:29.579755   20229 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:50:29.586904   20229 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-681000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.2013475s for "functional-681000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000: exit status 7 (68.222167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.27s)
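From here on the serial tests cascade: StartWithProxy left the profile in state=Stopped, SoftStart only retries the qemu restart against the same refused socket, and everything after it inherits a dead cluster. The stale profile can be confirmed and cleared with the same commands the report itself uses:

    out/minikube-darwin-arm64 status -p functional-681000    # prints "Stopped", exit status 7
    out/minikube-darwin-arm64 delete -p functional-681000    # removes the stale profile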

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (29.131625ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-681000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000: exit status 7 (31.81625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
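Because the cluster never started, no functional-681000 entry was ever written to the kubeconfig, so this check (and KubectlGetPods below) fails before minikube is even involved. Plain kubectl confirms it:

    kubectl config get-contexts       # functional-681000 is absent
    kubectl config current-context    # "error: current-context is not set"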

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-681000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-681000 get po -A: exit status 1 (26.785916ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-681000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-681000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-681000\n"*: args "kubectl --context functional-681000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-681000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000: exit status 7 (32.114917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh sudo crictl images: exit status 83 (43.010041ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-681000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (43.766ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-681000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (44.843125ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (42.855042ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-681000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 kubectl -- --context functional-681000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 kubectl -- --context functional-681000 get pods: exit status 1 (523.733166ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-681000
	* no server found for cluster "functional-681000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-681000 kubectl -- --context functional-681000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000: exit status 7 (34.6595ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.56s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.71s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-681000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-681000 get pods: exit status 1 (681.67675ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-681000
	* no server found for cluster "functional-681000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-681000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000: exit status 7 (31.12625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.71s)
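
Triage note: both kubectl tests above fail before reaching any API server; since the cluster never started, no "functional-681000" context was ever written to the kubeconfig. A quick confirmation sketch (the KUBECONFIG path is the one printed in the start output below; the grep fallback message is illustrative):

    # Show which contexts the run's kubeconfig actually contains.
    KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig \
      kubectl config get-contexts -o name | grep functional-681000 \
      || echo "context missing, matching the errors above"
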

TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-681000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-681000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.19134125s)

-- stdout --
	* [functional-681000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-681000" primary control-plane node in "functional-681000" cluster
	* Restarting existing qemu2 VM for "functional-681000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-681000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-681000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-681000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.191882834s for "functional-681000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000: exit status 7 (66.78375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
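
Root-cause note: the repeated 'Failed to connect to "/var/run/socket_vmnet": Connection refused' is the underlying failure for this run. The qemu2 driver dials the socket_vmnet daemon's socket via /opt/socket_vmnet/bin/socket_vmnet_client (see the libmachine command line in the log above), and nothing was listening on this agent. A hedged check, assuming a launchd/Homebrew-managed socket_vmnet install on MacOS-M1-Agent-2; the service-start line mirrors the minikube qemu2 driver documentation:

    # Does the socket exist, and is a daemon managing it?
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i socket_vmnet   # assumes a launchd/Homebrew service
    # If not running, minikube's qemu2 driver docs suggest:
    HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services start socket_vmnet
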

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-681000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-681000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.587541ms)

** stderr ** 
	error: context "functional-681000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-681000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000: exit status 7 (31.08475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 logs: exit status 83 (82.013292ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-305000 | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT |                     |
	|         | -p download-only-305000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT | 18 Mar 24 04:48 PDT |
	| delete  | -p download-only-305000                                                  | download-only-305000 | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT | 18 Mar 24 04:48 PDT |
	| start   | -o=json --download-only                                                  | download-only-573000 | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT |                     |
	|         | -p download-only-573000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
	| delete  | -p download-only-573000                                                  | download-only-573000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
	| start   | -o=json --download-only                                                  | download-only-945000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT |                     |
	|         | -p download-only-945000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                        |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
	| delete  | -p download-only-945000                                                  | download-only-945000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
	| delete  | -p download-only-305000                                                  | download-only-305000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
	| delete  | -p download-only-573000                                                  | download-only-573000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
	| delete  | -p download-only-945000                                                  | download-only-945000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
	| start   | --download-only -p                                                       | binary-mirror-892000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT |                     |
	|         | binary-mirror-892000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:54091                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-892000                                                  | binary-mirror-892000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
	| addons  | enable dashboard -p                                                      | addons-009000        | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT |                     |
	|         | addons-009000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-009000        | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT |                     |
	|         | addons-009000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-009000 --wait=true                                             | addons-009000        | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-009000                                                         | addons-009000        | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
	| start   | -p nospam-701000 -n=1 --memory=2250 --wait=false                         | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT |                     |
	|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-701000                                                         | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
	| start   | -p functional-681000                                                     | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-681000                                                     | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-681000 cache add                                              | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-681000 cache add                                              | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-681000 cache add                                              | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-681000 cache add                                              | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
	|         | minikube-local-cache-test:functional-681000                              |                      |         |         |                     |                     |
	| cache   | functional-681000 cache delete                                           | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
	|         | minikube-local-cache-test:functional-681000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
	| ssh     | functional-681000 ssh sudo                                               | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-681000                                                        | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-681000 ssh                                                    | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-681000 cache reload                                           | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
	| ssh     | functional-681000 ssh                                                    | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-681000 kubectl --                                             | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
	|         | --context functional-681000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-681000                                                     | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 04:50:38
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 04:50:38.737810   20311 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:50:38.737978   20311 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:50:38.737982   20311 out.go:304] Setting ErrFile to fd 2...
	I0318 04:50:38.737984   20311 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:50:38.738277   20311 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:50:38.739439   20311 out.go:298] Setting JSON to false
	I0318 04:50:38.755627   20311 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10211,"bootTime":1710752427,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:50:38.755688   20311 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:50:38.758786   20311 out.go:177] * [functional-681000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:50:38.767308   20311 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 04:50:38.770163   20311 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 04:50:38.767360   20311 notify.go:220] Checking for updates...
	I0318 04:50:38.775695   20311 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:50:38.780272   20311 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:50:38.783201   20311 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 04:50:38.784732   20311 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:50:38.789535   20311 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:50:38.789582   20311 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:50:38.794222   20311 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:50:38.801194   20311 start.go:297] selected driver: qemu2
	I0318 04:50:38.801198   20311 start.go:901] validating driver "qemu2" against &{Name:functional-681000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-681000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:50:38.801258   20311 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:50:38.803516   20311 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:50:38.803556   20311 cni.go:84] Creating CNI manager for ""
	I0318 04:50:38.803561   20311 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:50:38.803610   20311 start.go:340] cluster config:
	{Name:functional-681000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-681000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:50:38.808063   20311 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:50:38.816207   20311 out.go:177] * Starting "functional-681000" primary control-plane node in "functional-681000" cluster
	I0318 04:50:38.820183   20311 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:50:38.820194   20311 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:50:38.820201   20311 cache.go:56] Caching tarball of preloaded images
	I0318 04:50:38.820255   20311 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:50:38.820269   20311 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:50:38.820328   20311 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/functional-681000/config.json ...
	I0318 04:50:38.820802   20311 start.go:360] acquireMachinesLock for functional-681000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:50:38.820833   20311 start.go:364] duration metric: took 26.958µs to acquireMachinesLock for "functional-681000"
	I0318 04:50:38.820841   20311 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:50:38.820844   20311 fix.go:54] fixHost starting: 
	I0318 04:50:38.820960   20311 fix.go:112] recreateIfNeeded on functional-681000: state=Stopped err=<nil>
	W0318 04:50:38.820966   20311 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:50:38.828189   20311 out.go:177] * Restarting existing qemu2 VM for "functional-681000" ...
	I0318 04:50:38.832288   20311 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:0f:42:4b:60:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/disk.qcow2
	I0318 04:50:38.834385   20311 main.go:141] libmachine: STDOUT: 
	I0318 04:50:38.834402   20311 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:50:38.834429   20311 fix.go:56] duration metric: took 13.585708ms for fixHost
	I0318 04:50:38.834432   20311 start.go:83] releasing machines lock for "functional-681000", held for 13.596292ms
	W0318 04:50:38.834439   20311 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:50:38.834465   20311 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:50:38.834470   20311 start.go:728] Will try again in 5 seconds ...
	I0318 04:50:43.834648   20311 start.go:360] acquireMachinesLock for functional-681000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:50:43.835005   20311 start.go:364] duration metric: took 304.291µs to acquireMachinesLock for "functional-681000"
	I0318 04:50:43.835134   20311 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:50:43.835149   20311 fix.go:54] fixHost starting: 
	I0318 04:50:43.835796   20311 fix.go:112] recreateIfNeeded on functional-681000: state=Stopped err=<nil>
	W0318 04:50:43.835815   20311 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:50:43.841272   20311 out.go:177] * Restarting existing qemu2 VM for "functional-681000" ...
	I0318 04:50:43.856392   20311 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:0f:42:4b:60:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/disk.qcow2
	I0318 04:50:43.866021   20311 main.go:141] libmachine: STDOUT: 
	I0318 04:50:43.866099   20311 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:50:43.866190   20311 fix.go:56] duration metric: took 31.042084ms for fixHost
	I0318 04:50:43.866204   20311 start.go:83] releasing machines lock for "functional-681000", held for 31.183042ms
	W0318 04:50:43.866428   20311 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-681000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:50:43.872264   20311 out.go:177] 
	W0318 04:50:43.876293   20311 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:50:43.876397   20311 out.go:239] * 
	W0318 04:50:43.878802   20311 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:50:43.886093   20311 out.go:177] 
	
	
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-681000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-305000 | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT |                     |
|         | -p download-only-305000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT | 18 Mar 24 04:48 PDT |
| delete  | -p download-only-305000                                                  | download-only-305000 | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT | 18 Mar 24 04:48 PDT |
| start   | -o=json --download-only                                                  | download-only-573000 | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT |                     |
|         | -p download-only-573000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.28.4                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
| delete  | -p download-only-573000                                                  | download-only-573000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
| start   | -o=json --download-only                                                  | download-only-945000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT |                     |
|         | -p download-only-945000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.29.0-rc.2                                        |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
| delete  | -p download-only-945000                                                  | download-only-945000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
| delete  | -p download-only-305000                                                  | download-only-305000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
| delete  | -p download-only-573000                                                  | download-only-573000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
| delete  | -p download-only-945000                                                  | download-only-945000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
| start   | --download-only -p                                                       | binary-mirror-892000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT |                     |
|         | binary-mirror-892000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:54091                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-892000                                                  | binary-mirror-892000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
| addons  | enable dashboard -p                                                      | addons-009000        | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT |                     |
|         | addons-009000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-009000        | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT |                     |
|         | addons-009000                                                            |                      |         |         |                     |                     |
| start   | -p addons-009000 --wait=true                                             | addons-009000        | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-009000                                                         | addons-009000        | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
| start   | -p nospam-701000 -n=1 --memory=2250 --wait=false                         | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-701000                                                         | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
| start   | -p functional-681000                                                     | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-681000                                                     | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-681000 cache add                                              | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-681000 cache add                                              | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-681000 cache add                                              | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-681000 cache add                                              | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | minikube-local-cache-test:functional-681000                              |                      |         |         |                     |                     |
| cache   | functional-681000 cache delete                                           | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | minikube-local-cache-test:functional-681000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
| ssh     | functional-681000 ssh sudo                                               | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-681000                                                        | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-681000 ssh                                                    | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-681000 cache reload                                           | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
| ssh     | functional-681000 ssh                                                    | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-681000 kubectl --                                             | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | --context functional-681000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-681000                                                     | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/03/18 04:50:38
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0318 04:50:38.737810   20311 out.go:291] Setting OutFile to fd 1 ...
I0318 04:50:38.737978   20311 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:50:38.737982   20311 out.go:304] Setting ErrFile to fd 2...
I0318 04:50:38.737984   20311 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:50:38.738277   20311 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
I0318 04:50:38.739439   20311 out.go:298] Setting JSON to false
I0318 04:50:38.755627   20311 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10211,"bootTime":1710752427,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0318 04:50:38.755688   20311 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0318 04:50:38.758786   20311 out.go:177] * [functional-681000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
I0318 04:50:38.767308   20311 out.go:177]   - MINIKUBE_LOCATION=18427
I0318 04:50:38.770163   20311 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
I0318 04:50:38.767360   20311 notify.go:220] Checking for updates...
I0318 04:50:38.775695   20311 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0318 04:50:38.780272   20311 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0318 04:50:38.783201   20311 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
I0318 04:50:38.784732   20311 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0318 04:50:38.789535   20311 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:50:38.789582   20311 driver.go:392] Setting default libvirt URI to qemu:///system
I0318 04:50:38.794222   20311 out.go:177] * Using the qemu2 driver based on existing profile
I0318 04:50:38.801194   20311 start.go:297] selected driver: qemu2
I0318 04:50:38.801198   20311 start.go:901] validating driver "qemu2" against &{Name:functional-681000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-681000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0318 04:50:38.801258   20311 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0318 04:50:38.803516   20311 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0318 04:50:38.803556   20311 cni.go:84] Creating CNI manager for ""
I0318 04:50:38.803561   20311 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0318 04:50:38.803610   20311 start.go:340] cluster config:
{Name:functional-681000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-681000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0318 04:50:38.808063   20311 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0318 04:50:38.816207   20311 out.go:177] * Starting "functional-681000" primary control-plane node in "functional-681000" cluster
I0318 04:50:38.820183   20311 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0318 04:50:38.820194   20311 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
I0318 04:50:38.820201   20311 cache.go:56] Caching tarball of preloaded images
I0318 04:50:38.820255   20311 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0318 04:50:38.820269   20311 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I0318 04:50:38.820328   20311 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/functional-681000/config.json ...
I0318 04:50:38.820802   20311 start.go:360] acquireMachinesLock for functional-681000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0318 04:50:38.820833   20311 start.go:364] duration metric: took 26.958µs to acquireMachinesLock for "functional-681000"
I0318 04:50:38.820841   20311 start.go:96] Skipping create...Using existing machine configuration
I0318 04:50:38.820844   20311 fix.go:54] fixHost starting: 
I0318 04:50:38.820960   20311 fix.go:112] recreateIfNeeded on functional-681000: state=Stopped err=<nil>
W0318 04:50:38.820966   20311 fix.go:138] unexpected machine state, will restart: <nil>
I0318 04:50:38.828189   20311 out.go:177] * Restarting existing qemu2 VM for "functional-681000" ...
I0318 04:50:38.832288   20311 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:0f:42:4b:60:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/disk.qcow2
I0318 04:50:38.834385   20311 main.go:141] libmachine: STDOUT: 
I0318 04:50:38.834402   20311 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0318 04:50:38.834429   20311 fix.go:56] duration metric: took 13.585708ms for fixHost
I0318 04:50:38.834432   20311 start.go:83] releasing machines lock for "functional-681000", held for 13.596292ms
W0318 04:50:38.834439   20311 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0318 04:50:38.834465   20311 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0318 04:50:38.834470   20311 start.go:728] Will try again in 5 seconds ...
I0318 04:50:43.834648   20311 start.go:360] acquireMachinesLock for functional-681000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0318 04:50:43.835005   20311 start.go:364] duration metric: took 304.291µs to acquireMachinesLock for "functional-681000"
I0318 04:50:43.835134   20311 start.go:96] Skipping create...Using existing machine configuration
I0318 04:50:43.835149   20311 fix.go:54] fixHost starting: 
I0318 04:50:43.835796   20311 fix.go:112] recreateIfNeeded on functional-681000: state=Stopped err=<nil>
W0318 04:50:43.835815   20311 fix.go:138] unexpected machine state, will restart: <nil>
I0318 04:50:43.841272   20311 out.go:177] * Restarting existing qemu2 VM for "functional-681000" ...
I0318 04:50:43.856392   20311 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:0f:42:4b:60:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/disk.qcow2
I0318 04:50:43.866021   20311 main.go:141] libmachine: STDOUT: 
I0318 04:50:43.866099   20311 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0318 04:50:43.866190   20311 fix.go:56] duration metric: took 31.042084ms for fixHost
I0318 04:50:43.866204   20311 start.go:83] releasing machines lock for "functional-681000", held for 31.183042ms
W0318 04:50:43.866428   20311 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-681000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0318 04:50:43.872264   20311 out.go:177] 
W0318 04:50:43.876293   20311 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0318 04:50:43.876397   20311 out.go:239] * 
W0318 04:50:43.878802   20311 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0318 04:50:43.886093   20311 out.go:177] 

* The control-plane node functional-681000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-681000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
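
Both restart attempts in the log above fail at the same step: libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which is refused a connection to /var/run/socket_vmnet, so the guest VM never boots and minikube exits with GUEST_PROVISION. That points at the socket_vmnet daemon not running on the build agent rather than at QEMU itself. A minimal host-side triage sketch (hypothetical commands; the Homebrew service name assumes the standard socket_vmnet install implied by the /opt/socket_vmnet paths in the log):

  # Is the socket present, and is a daemon holding it open?
  ls -l /var/run/socket_vmnet
  pgrep -fl socket_vmnet

  # If nothing is listening, (re)start the daemon; it needs root
  # privileges to create the vmnet interface.
  sudo brew services start socket_vmnet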

TestFunctional/serial/LogsFileCmd (0.08s)

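The transcript below captures minikube logs --file and asserts that the output contains the word "Linux", which presumably appears in the kernel/OS sections of logs from a healthy Linux guest; because the VM never started, only host-side lines were written and the word is absent. The assertion amounts to a check along these lines (a sketch, assuming the captured file is the logs.txt path shown in the command below):

  grep -q "Linux" logs.txt && echo found || echo missing
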
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd3847099944/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-305000 | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT |                     |
|         | -p download-only-305000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT | 18 Mar 24 04:48 PDT |
| delete  | -p download-only-305000                                                  | download-only-305000 | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT | 18 Mar 24 04:48 PDT |
| start   | -o=json --download-only                                                  | download-only-573000 | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT |                     |
|         | -p download-only-573000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.28.4                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
| delete  | -p download-only-573000                                                  | download-only-573000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
| start   | -o=json --download-only                                                  | download-only-945000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT |                     |
|         | -p download-only-945000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.29.0-rc.2                                        |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
| delete  | -p download-only-945000                                                  | download-only-945000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
| delete  | -p download-only-305000                                                  | download-only-305000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
| delete  | -p download-only-573000                                                  | download-only-573000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
| delete  | -p download-only-945000                                                  | download-only-945000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
| start   | --download-only -p                                                       | binary-mirror-892000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT |                     |
|         | binary-mirror-892000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:54091                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-892000                                                  | binary-mirror-892000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
| addons  | enable dashboard -p                                                      | addons-009000        | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT |                     |
|         | addons-009000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-009000        | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT |                     |
|         | addons-009000                                                            |                      |         |         |                     |                     |
| start   | -p addons-009000 --wait=true                                             | addons-009000        | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-009000                                                         | addons-009000        | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
| start   | -p nospam-701000 -n=1 --memory=2250 --wait=false                         | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-701000 --log_dir                                                  | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-701000                                                         | nospam-701000        | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
| start   | -p functional-681000                                                     | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-681000                                                     | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-681000 cache add                                              | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-681000 cache add                                              | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-681000 cache add                                              | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-681000 cache add                                              | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | minikube-local-cache-test:functional-681000                              |                      |         |         |                     |                     |
| cache   | functional-681000 cache delete                                           | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | minikube-local-cache-test:functional-681000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
| ssh     | functional-681000 ssh sudo                                               | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-681000                                                        | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-681000 ssh                                                    | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-681000 cache reload                                           | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
| ssh     | functional-681000 ssh                                                    | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT | 18 Mar 24 04:50 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-681000 kubectl --                                             | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | --context functional-681000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-681000                                                     | functional-681000    | jenkins | v1.32.0 | 18 Mar 24 04:50 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/03/18 04:50:38
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0318 04:50:38.737810   20311 out.go:291] Setting OutFile to fd 1 ...
I0318 04:50:38.737978   20311 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:50:38.737982   20311 out.go:304] Setting ErrFile to fd 2...
I0318 04:50:38.737984   20311 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:50:38.738277   20311 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
I0318 04:50:38.739439   20311 out.go:298] Setting JSON to false
I0318 04:50:38.755627   20311 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10211,"bootTime":1710752427,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0318 04:50:38.755688   20311 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0318 04:50:38.758786   20311 out.go:177] * [functional-681000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
I0318 04:50:38.767308   20311 out.go:177]   - MINIKUBE_LOCATION=18427
I0318 04:50:38.770163   20311 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
I0318 04:50:38.767360   20311 notify.go:220] Checking for updates...
I0318 04:50:38.775695   20311 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0318 04:50:38.780272   20311 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0318 04:50:38.783201   20311 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
I0318 04:50:38.784732   20311 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0318 04:50:38.789535   20311 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:50:38.789582   20311 driver.go:392] Setting default libvirt URI to qemu:///system
I0318 04:50:38.794222   20311 out.go:177] * Using the qemu2 driver based on existing profile
I0318 04:50:38.801194   20311 start.go:297] selected driver: qemu2
I0318 04:50:38.801198   20311 start.go:901] validating driver "qemu2" against &{Name:functional-681000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-681000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0318 04:50:38.801258   20311 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0318 04:50:38.803516   20311 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0318 04:50:38.803556   20311 cni.go:84] Creating CNI manager for ""
I0318 04:50:38.803561   20311 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0318 04:50:38.803610   20311 start.go:340] cluster config:
{Name:functional-681000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-681000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0318 04:50:38.808063   20311 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0318 04:50:38.816207   20311 out.go:177] * Starting "functional-681000" primary control-plane node in "functional-681000" cluster
I0318 04:50:38.820183   20311 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0318 04:50:38.820194   20311 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
I0318 04:50:38.820201   20311 cache.go:56] Caching tarball of preloaded images
I0318 04:50:38.820255   20311 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0318 04:50:38.820269   20311 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I0318 04:50:38.820328   20311 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/functional-681000/config.json ...
I0318 04:50:38.820802   20311 start.go:360] acquireMachinesLock for functional-681000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0318 04:50:38.820833   20311 start.go:364] duration metric: took 26.958µs to acquireMachinesLock for "functional-681000"
I0318 04:50:38.820841   20311 start.go:96] Skipping create...Using existing machine configuration
I0318 04:50:38.820844   20311 fix.go:54] fixHost starting: 
I0318 04:50:38.820960   20311 fix.go:112] recreateIfNeeded on functional-681000: state=Stopped err=<nil>
W0318 04:50:38.820966   20311 fix.go:138] unexpected machine state, will restart: <nil>
I0318 04:50:38.828189   20311 out.go:177] * Restarting existing qemu2 VM for "functional-681000" ...
I0318 04:50:38.832288   20311 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:0f:42:4b:60:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/disk.qcow2
I0318 04:50:38.834385   20311 main.go:141] libmachine: STDOUT: 
I0318 04:50:38.834402   20311 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0318 04:50:38.834429   20311 fix.go:56] duration metric: took 13.585708ms for fixHost
I0318 04:50:38.834432   20311 start.go:83] releasing machines lock for "functional-681000", held for 13.596292ms
W0318 04:50:38.834439   20311 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0318 04:50:38.834465   20311 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0318 04:50:38.834470   20311 start.go:728] Will try again in 5 seconds ...
I0318 04:50:43.834648   20311 start.go:360] acquireMachinesLock for functional-681000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0318 04:50:43.835005   20311 start.go:364] duration metric: took 304.291µs to acquireMachinesLock for "functional-681000"
I0318 04:50:43.835134   20311 start.go:96] Skipping create...Using existing machine configuration
I0318 04:50:43.835149   20311 fix.go:54] fixHost starting: 
I0318 04:50:43.835796   20311 fix.go:112] recreateIfNeeded on functional-681000: state=Stopped err=<nil>
W0318 04:50:43.835815   20311 fix.go:138] unexpected machine state, will restart: <nil>
I0318 04:50:43.841272   20311 out.go:177] * Restarting existing qemu2 VM for "functional-681000" ...
I0318 04:50:43.856392   20311 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:0f:42:4b:60:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/functional-681000/disk.qcow2
I0318 04:50:43.866021   20311 main.go:141] libmachine: STDOUT: 
I0318 04:50:43.866099   20311 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0318 04:50:43.866190   20311 fix.go:56] duration metric: took 31.042084ms for fixHost
I0318 04:50:43.866204   20311 start.go:83] releasing machines lock for "functional-681000", held for 31.183042ms
W0318 04:50:43.866428   20311 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-681000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0318 04:50:43.872264   20311 out.go:177] 
W0318 04:50:43.876293   20311 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0318 04:50:43.876397   20311 out.go:239] * 
W0318 04:50:43.878802   20311 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0318 04:50:43.886093   20311 out.go:177] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.08s)
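Every restart attempt above dies on the same root cause: the qemu2 driver cannot reach the socket_vmnet socket at /var/run/socket_vmnet ("Connection refused"), so the VM never comes up and everything downstream inherits a stopped host. A minimal triage sketch for the agent follows; it assumes the Homebrew-managed socket_vmnet install implied by the /opt/socket_vmnet paths in the log, and the service name may differ per machine.

    # Is the daemon alive, and does the socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet

    # Restart the daemon (it must run as root to create the vmnet interface)
    sudo brew services restart socket_vmnet

    # Then retry the profile
    out/minikube-darwin-arm64 start -p functional-681000

Until that socket is reachable, the remaining failures fall into two shapes: kubectl commands abort because the "functional-681000" context was never written, and minikube subcommands abort because the host is Stopped.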

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-681000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-681000 apply -f testdata/invalidsvc.yaml: exit status 1 (26.671875ms)

** stderr ** 
	error: context "functional-681000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-681000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
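The `context "functional-681000" does not exist` error here and in the parallel tests below is a direct consequence of the aborted start: minikube only writes a kubectl context once the cluster is provisioned. A quick check against the kubeconfig this suite uses (path taken from the start log above):

    # The context should be absent until a start succeeds
    KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig \
      kubectl config get-contexts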

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-681000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-681000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-681000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-681000 --alsologtostderr -v=1] stderr:
I0318 04:51:37.756180   20653 out.go:291] Setting OutFile to fd 1 ...
I0318 04:51:37.756543   20653 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:51:37.756547   20653 out.go:304] Setting ErrFile to fd 2...
I0318 04:51:37.756549   20653 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:51:37.756729   20653 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
I0318 04:51:37.756980   20653 mustload.go:65] Loading cluster: functional-681000
I0318 04:51:37.757163   20653 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:51:37.761277   20653 out.go:177] * The control-plane node functional-681000 host is not running: state=Stopped
I0318 04:51:37.764191   20653 out.go:177]   To start a cluster, run: "minikube start -p functional-681000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000: exit status 7 (43.934625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 status: exit status 7 (31.559291ms)

-- stdout --
	functional-681000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-681000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (31.989875ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-681000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 status -o json: exit status 7 (31.959708ms)

-- stdout --
	{"Name":"functional-681000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-681000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000: exit status 7 (32.486583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.13s)
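The repeated `exit status 7` from the status command is the expected encoding for a fully stopped profile: minikube status reports host, kubelet and apiserver state as separate bits of its exit code (the flag constants live in cmd/minikube/cmd/status.go), so 7 means all three components are flagged as not running. A sketch of reading it by hand:

    out/minikube-darwin-arm64 -p functional-681000 status
    echo $?   # 7: host, kubelet and apiserver all flagged as not running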

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-681000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-681000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.795083ms)

** stderr ** 
	error: context "functional-681000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-681000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-681000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-681000 describe po hello-node-connect: exit status 1 (26.840416ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-681000

** /stderr **
functional_test.go:1600: "kubectl --context functional-681000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-681000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-681000 logs -l app=hello-node-connect: exit status 1 (26.541667ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-681000

** /stderr **
functional_test.go:1606: "kubectl --context functional-681000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-681000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-681000 describe svc hello-node-connect: exit status 1 (26.441375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-681000

** /stderr **
functional_test.go:1612: "kubectl --context functional-681000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000: exit status 7 (32.0695ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-681000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000: exit status 7 (32.439209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "echo hello": exit status 83 (44.759458ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-681000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-681000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-681000\"\n"*. args "out/minikube-darwin-arm64 -p functional-681000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "cat /etc/hostname": exit status 83 (44.813792ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-681000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-681000"- but got *"* The control-plane node functional-681000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-681000\"\n"*. args "out/minikube-darwin-arm64 -p functional-681000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000: exit status 7 (32.059583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (58.596625ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-681000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh -n functional-681000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh -n functional-681000 "sudo cat /home/docker/cp-test.txt": exit status 83 (43.911417ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-681000 ssh -n functional-681000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-681000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-681000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 cp functional-681000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1337600083/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 cp functional-681000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1337600083/001/cp-test.txt: exit status 83 (52.384292ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-681000 cp functional-681000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1337600083/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh -n functional-681000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh -n functional-681000 "sudo cat /home/docker/cp-test.txt": exit status 83 (44.871916ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-681000 ssh -n functional-681000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1337600083/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-681000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-681000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (48.7645ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-681000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh -n functional-681000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh -n functional-681000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (42.928416ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-681000 ssh -n functional-681000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-681000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-681000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.29s)
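Note the failure shape here: cp and ssh exit with status 83 (the code minikube's reason package appears to use for a guest that is not running), and their advisory text is what then shows up as the "file content" in the diffs above. If the intent were to skip rather than fail these checks when the guest is down, a guard like the following sketch would do it (a hypothetical CI-side wrapper, not part of the test suite):

    # Only exercise cp/ssh when the host reports Running
    if out/minikube-darwin-arm64 -p functional-681000 status --format '{{.Host}}' | grep -q Running; then
      out/minikube-darwin-arm64 -p functional-681000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    fi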

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/19926/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "sudo cat /etc/test/nested/copy/19926/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "sudo cat /etc/test/nested/copy/19926/hosts": exit status 83 (50.239917ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-681000 ssh "sudo cat /etc/test/nested/copy/19926/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-681000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-681000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-681000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-681000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000: exit status 7 (31.935375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)
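FileSync exercises minikube's documented host-to-guest file sync: files placed under $MINIKUBE_HOME/files/<path> are copied to /<path> inside the VM at start, which is how /etc/test/nested/copy/19926/hosts would normally appear. With the guest down there is nothing to read, so the diff again captures the advisory text instead. A sketch of the mechanism, using the MINIKUBE_HOME from this run (the seeded content is illustrative):

    # Host-side layout is mirrored into the guest on the next start
    FILES=/Users/jenkins/minikube-integration/18427-19517/.minikube/files
    mkdir -p "$FILES/etc/test/nested/copy/19926"
    cp /etc/hosts "$FILES/etc/test/nested/copy/19926/hosts"
    out/minikube-darwin-arm64 -p functional-681000 ssh "cat /etc/test/nested/copy/19926/hosts"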

TestFunctional/parallel/CertSync (0.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/19926.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "sudo cat /etc/ssl/certs/19926.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "sudo cat /etc/ssl/certs/19926.pem": exit status 83 (42.047917ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/19926.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-681000 ssh \"sudo cat /etc/ssl/certs/19926.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/19926.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-681000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-681000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/19926.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "sudo cat /usr/share/ca-certificates/19926.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "sudo cat /usr/share/ca-certificates/19926.pem": exit status 83 (41.643291ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/19926.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-681000 ssh \"sudo cat /usr/share/ca-certificates/19926.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/19926.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-681000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-681000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (45.710792ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-681000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-681000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-681000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/199262.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "sudo cat /etc/ssl/certs/199262.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "sudo cat /etc/ssl/certs/199262.pem": exit status 83 (46.674833ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/199262.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-681000 ssh \"sudo cat /etc/ssl/certs/199262.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/199262.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-681000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-681000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/199262.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "sudo cat /usr/share/ca-certificates/199262.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "sudo cat /usr/share/ca-certificates/199262.pem": exit status 83 (40.871583ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/199262.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-681000 ssh \"sudo cat /usr/share/ca-certificates/199262.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/199262.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-681000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-681000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (43.580458ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-681000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-681000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-681000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000: exit status 7 (31.35575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.29s)
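Both mismatches above are the same failure: the want side of each diff is the test PEM, while the got side is minikube's stopped-host advice, because the ssh commands exited with status 83 before any file could be read. The "(-want +got)" layout matches the go-cmp library's Diff output; a minimal sketch of that comparison pattern (a hypothetical re-run outside the suite, assuming a minikube binary on PATH; not the suite's actual code):

    package main

    import (
        "fmt"
        "os/exec"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        // want stands in for the contents of minikube_test2.pem (elided here).
        want := "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"
        // Profile name and in-VM path copied from the log above.
        out, _ := exec.Command("minikube", "-p", "functional-681000",
            "ssh", "sudo cat /etc/ssl/certs/3ec20f2e.0").CombinedOutput()
        if diff := cmp.Diff(want, string(out)); diff != "" {
            fmt.Printf("pem mismatch (-want +got):\n%s", diff)
        }
    }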

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-681000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-681000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.179ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-681000

                                                
                                                
** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-681000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-681000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-681000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-681000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-681000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-681000

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-681000 -n functional-681000: exit status 7 (31.55525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-681000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
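Every missing-label complaint above is downstream of one problem: `minikube start` never completed, so no "functional-681000" entry was written to the kubeconfig and every kubectl call dies in configuration before reaching a cluster. A sketch of an equivalent label probe (hypothetical, not the suite's code; the command and label keys are copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        tmpl := `--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'`
        out, err := exec.Command("kubectl", "--context", "functional-681000",
            "get", "nodes", "--output=go-template", tmpl).CombinedOutput()
        if err != nil {
            // In this run: "context was not found for specified context".
            fmt.Printf("kubectl failed: %v\n%s", err, out)
            return
        }
        for _, want := range []string{"minikube.k8s.io/commit", "minikube.k8s.io/version",
            "minikube.k8s.io/name", "minikube.k8s.io/primary"} {
            fmt.Println(want, strings.Contains(string(out), want))
        }
    }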

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "sudo systemctl is-active crio": exit status 83 (41.44275ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

                                                
                                                
-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

                                                
                                                
-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-681000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-681000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
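This profile runs the docker runtime (see the "ContainerRuntime=docker" config lines throughout this report), so `systemctl is-active crio` should print "inactive" and exit non-zero inside the VM. Exit status 83, which this report consistently pairs with the stopped-host advice, means minikube refused to ssh at all. A sketch of the check (hypothetical re-run; systemd's is-active exit convention assumed):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("minikube", "-p", "functional-681000",
            "ssh", "sudo systemctl is-active crio").CombinedOutput()
        // On a docker-runtime cluster the expected result is "inactive" plus a
        // non-zero exit from systemd; exit status 83 means the ssh never happened.
        fmt.Printf("%s(err=%v)\n", out, err)
    }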

                                                
                                    
TestFunctional/parallel/Version/components (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 version -o=json --components: exit status 83 (41.944416ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

                                                
                                                
-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-681000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-681000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-681000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-681000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-681000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-681000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-681000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-681000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-681000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-681000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-681000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-681000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-681000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-681000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-681000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-681000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-681000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-681000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-681000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-681000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)
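The names being searched for (buildctl, containerd, crictl, minikubeVersion, and so on) suggest the assertion is a plain substring scan over the command's JSON output; with exit 83 that output is the advice text and every scan misses. A sketch of that style of check (assumed equivalent, not the test's code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, _ := exec.Command("minikube", "-p", "functional-681000",
            "version", "-o=json", "--components").CombinedOutput()
        for _, want := range []string{"minikubeVersion", "buildctl", "containerd", "crictl", "docker"} {
            if !strings.Contains(string(out), want) {
                fmt.Println("missing component:", want)
            }
        }
    }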

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-681000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-681000 image ls --format short --alsologtostderr:
I0318 04:51:38.167324   20668 out.go:291] Setting OutFile to fd 1 ...
I0318 04:51:38.167483   20668 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:51:38.167486   20668 out.go:304] Setting ErrFile to fd 2...
I0318 04:51:38.167488   20668 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:51:38.167608   20668 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
I0318 04:51:38.168035   20668 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:51:38.168101   20668 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-681000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-681000 image ls --format table --alsologtostderr:
I0318 04:51:38.399569   20680 out.go:291] Setting OutFile to fd 1 ...
I0318 04:51:38.399714   20680 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:51:38.399719   20680 out.go:304] Setting ErrFile to fd 2...
I0318 04:51:38.399721   20680 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:51:38.399846   20680 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
I0318 04:51:38.400316   20680 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:51:38.400376   20680 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-681000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-681000 image ls --format json --alsologtostderr:
I0318 04:51:38.361391   20678 out.go:291] Setting OutFile to fd 1 ...
I0318 04:51:38.361546   20678 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:51:38.361549   20678 out.go:304] Setting ErrFile to fd 2...
I0318 04:51:38.361551   20678 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:51:38.361685   20678 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
I0318 04:51:38.362076   20678 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:51:38.362135   20678 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-681000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-681000 image ls --format yaml --alsologtostderr:
I0318 04:51:38.204846   20670 out.go:291] Setting OutFile to fd 1 ...
I0318 04:51:38.205012   20670 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:51:38.205015   20670 out.go:304] Setting ErrFile to fd 2...
I0318 04:51:38.205018   20670 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:51:38.205148   20670 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
I0318 04:51:38.205532   20670 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:51:38.205594   20670 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)
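The four image-list variants above (short, table, json, yaml) all agree: the list is empty, as the literal "[]" and the header-only table show, because there is no running runtime to enumerate. A sketch that makes the emptiness explicit by parsing the JSON form (assuming, per the "[]" above, that --format json emits a JSON array):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        out, _ := exec.Command("minikube", "-p", "functional-681000",
            "image", "ls", "--format", "json").Output()
        var imgs []json.RawMessage
        if err := json.Unmarshal(out, &imgs); err != nil || len(imgs) == 0 {
            // A healthy cluster would list registry.k8s.io/pause among others.
            fmt.Printf("expected a non-empty image list, got %q (err=%v)\n", out, err)
        }
    }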

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh pgrep buildkitd: exit status 83 (42.652291ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

                                                
                                                
-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 image build -t localhost/my-image:functional-681000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-681000 image build -t localhost/my-image:functional-681000 testdata/build --alsologtostderr:
I0318 04:51:38.285557   20674 out.go:291] Setting OutFile to fd 1 ...
I0318 04:51:38.286348   20674 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:51:38.286353   20674 out.go:304] Setting ErrFile to fd 2...
I0318 04:51:38.286355   20674 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:51:38.286494   20674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
I0318 04:51:38.286915   20674 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:51:38.287354   20674 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:51:38.287586   20674 build_images.go:133] succeeded building to: 
I0318 04:51:38.287589   20674 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 image ls
functional_test.go:442: expected "localhost/my-image:functional-681000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)
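Note the build "succeeds" vacuously: build_images.go logs empty "succeeded building to:" and "failed building to:" lists, meaning there was no target to build into, so the follow-up `image ls` cannot find the tag. The pgrep probe that precedes the build is short-circuited by exit 83 as well; sketched as a hypothetical re-run:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("minikube", "-p", "functional-681000",
            "ssh", "pgrep buildkitd").Run()
        // A non-zero exit normally just means "no buildkitd running"; exit
        // status 83 means the ssh into the guest never happened at all.
        fmt.Println("pgrep buildkitd:", err)
    }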

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-681000 docker-env) && out/minikube-darwin-arm64 status -p functional-681000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-681000 docker-env) && out/minikube-darwin-arm64 status -p functional-681000": exit status 1 (46.376083ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)
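docker-env is expected to print shell exports (DOCKER_HOST and friends, under the usual docker-env contract; hedged, since this run never got that far). When it prints the advice text instead, the eval is harmless but leaves nothing set, and the chained status command then fails. A sketch of the same eval round-trip:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("/bin/bash", "-c",
            `eval $(minikube -p functional-681000 docker-env) && echo "DOCKER_HOST=$DOCKER_HOST"`).CombinedOutput()
        fmt.Printf("%s(err=%v)\n", out, err) // DOCKER_HOST stays empty in this run
    }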

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 update-context --alsologtostderr -v=2: exit status 83 (44.936959ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:51:38.038636   20662 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:51:38.038992   20662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:51:38.038995   20662 out.go:304] Setting ErrFile to fd 2...
	I0318 04:51:38.038998   20662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:51:38.039125   20662 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:51:38.039315   20662 mustload.go:65] Loading cluster: functional-681000
	I0318 04:51:38.039519   20662 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:51:38.044250   20662 out.go:177] * The control-plane node functional-681000 host is not running: state=Stopped
	I0318 04:51:38.048143   20662 out.go:177]   To start a cluster, run: "minikube start -p functional-681000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-681000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-681000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-681000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 update-context --alsologtostderr -v=2: exit status 83 (41.302458ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:51:38.125639   20666 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:51:38.125772   20666 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:51:38.125776   20666 out.go:304] Setting ErrFile to fd 2...
	I0318 04:51:38.125778   20666 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:51:38.125893   20666 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:51:38.126106   20666 mustload.go:65] Loading cluster: functional-681000
	I0318 04:51:38.126309   20666 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:51:38.130355   20666 out.go:177] * The control-plane node functional-681000 host is not running: state=Stopped
	I0318 04:51:38.133108   20666 out.go:177]   To start a cluster, run: "minikube start -p functional-681000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-681000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-681000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-681000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 update-context --alsologtostderr -v=2: exit status 83 (42.610708ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:51:38.082891   20664 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:51:38.083036   20664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:51:38.083040   20664 out.go:304] Setting ErrFile to fd 2...
	I0318 04:51:38.083042   20664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:51:38.083166   20664 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:51:38.083387   20664 mustload.go:65] Loading cluster: functional-681000
	I0318 04:51:38.083574   20664 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:51:38.088211   20664 out.go:177] * The control-plane node functional-681000 host is not running: state=Stopped
	I0318 04:51:38.091262   20664 out.go:177]   To start a cluster, run: "minikube start -p functional-681000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-681000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-681000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-681000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)
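All three update-context subtests fail identically: the want=*"..."* patterns ("No changes", "context has been updated") never match because the command exits 83 with the advice text. Assuming the pattern check reduces to a substring match, the failure is easy to see in isolation:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        got := "* The control-plane node functional-681000 host is not running: state=Stopped\n" +
            "  To start a cluster, run: \"minikube start -p functional-681000\"\n"
        for _, want := range []string{"No changes", "context has been updated"} {
            fmt.Printf("contains %q: %v\n", want, strings.Contains(got, want)) // false, false
        }
    }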

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-681000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-681000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (25.993041ms)

                                                
                                                
** stderr ** 
	error: context "functional-681000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-681000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 service list: exit status 83 (51.430958ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

                                                
                                                
-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-681000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-681000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-681000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 service list -o json: exit status 83 (42.90975ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

                                                
                                                
-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-681000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 service --namespace=default --https --url hello-node: exit status 83 (44.74725ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

                                                
                                                
-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-681000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 service hello-node --url --format={{.IP}}: exit status 83 (44.745208ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

                                                
                                                
-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-681000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-681000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-681000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 service hello-node --url: exit status 83 (44.911292ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

                                                
                                                
-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-681000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-681000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-681000"
functional_test.go:1565: failed to parse "* The control-plane node functional-681000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-681000\"": parse "* The control-plane node functional-681000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-681000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.05s)
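The whole ServiceCmd chain collapses from its first step: DeployApp could not create the hello-node deployment (no kubectl context), so List, JSONOutput, HTTPS, Format, and URL had nothing to query even before their own exit-83 failures. The final parse error is reproducible with the standard library alone, since url.Parse rejects the newline embedded in the advice text as a control character:

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        s := "* The control-plane node functional-681000 host is not running: state=Stopped\n" +
            "  To start a cluster, run: \"minikube start -p functional-681000\""
        _, err := url.Parse(s)
        fmt.Println(err) // parse error ending in: net/url: invalid control character in URL
    }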

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-681000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-681000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0318 04:50:46.939856   20440 out.go:291] Setting OutFile to fd 1 ...
I0318 04:50:46.940012   20440 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:50:46.940016   20440 out.go:304] Setting ErrFile to fd 2...
I0318 04:50:46.940018   20440 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:50:46.940174   20440 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
I0318 04:50:46.940408   20440 mustload.go:65] Loading cluster: functional-681000
I0318 04:50:46.940604   20440 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:50:46.945313   20440 out.go:177] * The control-plane node functional-681000 host is not running: state=Stopped
I0318 04:50:46.952252   20440 out.go:177]   To start a cluster, run: "minikube start -p functional-681000"

                                                
                                                
stdout: * The control-plane node functional-681000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-681000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-681000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-681000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-681000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-681000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 20441: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-681000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-681000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-681000": client config: context "functional-681000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (115s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-681000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-681000 get svc nginx-svc: exit status 1 (69.944917ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-681000

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-681000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (115.00s)
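The empty URL is the tunnel's doing: the tunnel process exited immediately (see RunSecondTunnel above), so no service IP was ever published and the test polled a host-less "http://". The standard library reproduces the exact error:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        _, err := http.Get("http://")
        fmt.Println(err) // Get "http:": http: no Host in request URL
    }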

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 image load --daemon gcr.io/google-containers/addon-resizer:functional-681000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-681000 image load --daemon gcr.io/google-containers/addon-resizer:functional-681000 --alsologtostderr: (1.322264667s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-681000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 image load --daemon gcr.io/google-containers/addon-resizer:functional-681000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-681000 image load --daemon gcr.io/google-containers/addon-resizer:functional-681000 --alsologtostderr: (1.300041875s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-681000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.398738s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-681000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 image load --daemon gcr.io/google-containers/addon-resizer:functional-681000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-681000 image load --daemon gcr.io/google-containers/addon-resizer:functional-681000 --alsologtostderr: (1.176439375s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-681000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.65s)
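All three daemon-load variants above share one pattern: the `image load --daemon` command itself exits cleanly (the host-side docker daemon accepts the image), but the verifying `image ls` against the stopped guest never shows it. Sketched as a hypothetical re-run (commands and image name from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        img := "gcr.io/google-containers/addon-resizer:functional-681000"
        _ = exec.Command("minikube", "-p", "functional-681000",
            "image", "load", "--daemon", img).Run()
        out, _ := exec.Command("minikube", "-p", "functional-681000", "image", "ls").Output()
        fmt.Println("loaded:", strings.Contains(string(out), img)) // false in this run
    }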

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 image save gcr.io/google-containers/addon-resizer:functional-681000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)
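Since `image save` wrote nothing, the missing tar also likely explains ImageLoadFromFile just below: there was nothing real to load back. A sketch of the save-then-stat step (path copied from the log; hypothetical re-run):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        tar := "/Users/jenkins/workspace/addon-resizer-save.tar"
        _ = exec.Command("minikube", "-p", "functional-681000", "image", "save",
            "gcr.io/google-containers/addon-resizer:functional-681000", tar).Run()
        if _, err := os.Stat(tar); err != nil {
            fmt.Println("tar not written:", err) // what the test observed
        }
    }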

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-681000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.029998167s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)
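The scutil dump shows why dig was pointed at 10.96.0.10: resolver #8 scopes cluster.local to that nameserver, the in-cluster DNS service the tunnel is supposed to make reachable from the host. With the tunnel dead the queries time out and dig exits 9. The same lookup in Go, for reference (assuming the 10.96.0.10:53 endpoint shown in the log):

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: 5 * time.Second}
                return d.DialContext(ctx, network, "10.96.0.10:53")
            },
        }
        addrs, err := r.LookupHost(context.Background(), "nginx-svc.default.svc.cluster.local.")
        fmt.Println(addrs, err) // times out while the tunnel is down
    }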

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (21.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (21.89s)
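The timeout text above is the standard net/http client-timeout wrapper; a reproducible sketch (hostname from the log, the 2-second timeout is arbitrary):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        c := &http.Client{Timeout: 2 * time.Second}
        _, err := c.Get("http://nginx-svc.default.svc.cluster.local./")
        fmt.Println(err) // context deadline exceeded (Client.Timeout exceeded while awaiting headers)
    }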

                                                
                                    
TestMultiControlPlane/serial/StartCluster (9.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-404000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-404000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.792666292s)

                                                
                                                
-- stdout --
	* [ha-404000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-404000" primary control-plane node in "ha-404000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-404000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
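The two OUTPUT/ERROR pairs above appear to be the root cause for this test, and the pattern recurs across this report: on this setup QEMU's networking goes through the socket_vmnet daemon behind /var/run/socket_vmnet ("Automatically selected the socket_vmnet network"), and the connection-refused error says nothing is listening on that socket. A quick probe, using the same socket path as the log:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            // "connection refused" (or "no such file") while socket_vmnet is down,
            // matching the ERROR lines above.
            fmt.Println(err)
            return
        }
        conn.Close()
    }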
** stderr ** 
	I0318 04:53:29.517861   20715 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:53:29.518023   20715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:53:29.518026   20715 out.go:304] Setting ErrFile to fd 2...
	I0318 04:53:29.518028   20715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:53:29.518151   20715 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:53:29.519170   20715 out.go:298] Setting JSON to false
	I0318 04:53:29.535555   20715 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10382,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:53:29.535614   20715 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:53:29.541448   20715 out.go:177] * [ha-404000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:53:29.550405   20715 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 04:53:29.550453   20715 notify.go:220] Checking for updates...
	I0318 04:53:29.554298   20715 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 04:53:29.557356   20715 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:53:29.560499   20715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:53:29.561924   20715 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 04:53:29.565393   20715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:53:29.568495   20715 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:53:29.572222   20715 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:53:29.579336   20715 start.go:297] selected driver: qemu2
	I0318 04:53:29.579342   20715 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:53:29.579349   20715 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:53:29.581657   20715 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:53:29.585370   20715 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:53:29.589384   20715 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:53:29.589422   20715 cni.go:84] Creating CNI manager for ""
	I0318 04:53:29.589426   20715 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 04:53:29.589434   20715 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 04:53:29.589460   20715 start.go:340] cluster config:
	{Name:ha-404000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-404000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:53:29.593895   20715 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:53:29.602365   20715 out.go:177] * Starting "ha-404000" primary control-plane node in "ha-404000" cluster
	I0318 04:53:29.606358   20715 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:53:29.606374   20715 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:53:29.606389   20715 cache.go:56] Caching tarball of preloaded images
	I0318 04:53:29.606446   20715 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:53:29.606451   20715 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:53:29.606714   20715 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/ha-404000/config.json ...
	I0318 04:53:29.606726   20715 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/ha-404000/config.json: {Name:mkc39f2f8c3c081156d495813edb7d37b3273b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:53:29.606938   20715 start.go:360] acquireMachinesLock for ha-404000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:53:29.606969   20715 start.go:364] duration metric: took 24.959µs to acquireMachinesLock for "ha-404000"
	I0318 04:53:29.606981   20715 start.go:93] Provisioning new machine with config: &{Name:ha-404000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-404000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:53:29.607011   20715 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:53:29.616354   20715 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:53:29.633206   20715 start.go:159] libmachine.API.Create for "ha-404000" (driver="qemu2")
	I0318 04:53:29.633231   20715 client.go:168] LocalClient.Create starting
	I0318 04:53:29.633293   20715 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 04:53:29.633327   20715 main.go:141] libmachine: Decoding PEM data...
	I0318 04:53:29.633337   20715 main.go:141] libmachine: Parsing certificate...
	I0318 04:53:29.633379   20715 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 04:53:29.633404   20715 main.go:141] libmachine: Decoding PEM data...
	I0318 04:53:29.633411   20715 main.go:141] libmachine: Parsing certificate...
	I0318 04:53:29.633839   20715 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:53:29.776564   20715 main.go:141] libmachine: Creating SSH key...
	I0318 04:53:29.824929   20715 main.go:141] libmachine: Creating Disk image...
	I0318 04:53:29.824938   20715 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:53:29.825129   20715 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/disk.qcow2
	I0318 04:53:29.837262   20715 main.go:141] libmachine: STDOUT: 
	I0318 04:53:29.837288   20715 main.go:141] libmachine: STDERR: 
	I0318 04:53:29.837341   20715 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/disk.qcow2 +20000M
	I0318 04:53:29.848159   20715 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:53:29.848172   20715 main.go:141] libmachine: STDERR: 
	I0318 04:53:29.848192   20715 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/disk.qcow2
	I0318 04:53:29.848201   20715 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:53:29.848226   20715 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:bd:75:49:cc:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/disk.qcow2
	I0318 04:53:29.849889   20715 main.go:141] libmachine: STDOUT: 
	I0318 04:53:29.849905   20715 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:53:29.849927   20715 client.go:171] duration metric: took 216.696458ms to LocalClient.Create
	I0318 04:53:31.852117   20715 start.go:128] duration metric: took 2.245137583s to createHost
	I0318 04:53:31.852193   20715 start.go:83] releasing machines lock for "ha-404000", held for 2.245286333s
	W0318 04:53:31.852256   20715 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:53:31.867475   20715 out.go:177] * Deleting "ha-404000" in qemu2 ...
	W0318 04:53:31.895912   20715 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:53:31.895955   20715 start.go:728] Will try again in 5 seconds ...
	I0318 04:53:36.897872   20715 start.go:360] acquireMachinesLock for ha-404000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:53:36.898245   20715 start.go:364] duration metric: took 230.917µs to acquireMachinesLock for "ha-404000"
	I0318 04:53:36.898326   20715 start.go:93] Provisioning new machine with config: &{Name:ha-404000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-404000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:53:36.898582   20715 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:53:36.907147   20715 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:53:36.956280   20715 start.go:159] libmachine.API.Create for "ha-404000" (driver="qemu2")
	I0318 04:53:36.956335   20715 client.go:168] LocalClient.Create starting
	I0318 04:53:36.956446   20715 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 04:53:36.956518   20715 main.go:141] libmachine: Decoding PEM data...
	I0318 04:53:36.956536   20715 main.go:141] libmachine: Parsing certificate...
	I0318 04:53:36.956608   20715 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 04:53:36.956650   20715 main.go:141] libmachine: Decoding PEM data...
	I0318 04:53:36.956674   20715 main.go:141] libmachine: Parsing certificate...
	I0318 04:53:36.957388   20715 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:53:37.110482   20715 main.go:141] libmachine: Creating SSH key...
	I0318 04:53:37.204404   20715 main.go:141] libmachine: Creating Disk image...
	I0318 04:53:37.204414   20715 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:53:37.204599   20715 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/disk.qcow2
	I0318 04:53:37.217245   20715 main.go:141] libmachine: STDOUT: 
	I0318 04:53:37.217264   20715 main.go:141] libmachine: STDERR: 
	I0318 04:53:37.217315   20715 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/disk.qcow2 +20000M
	I0318 04:53:37.228323   20715 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:53:37.228348   20715 main.go:141] libmachine: STDERR: 
	I0318 04:53:37.228359   20715 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/disk.qcow2
	I0318 04:53:37.228366   20715 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:53:37.228412   20715 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:65:28:39:7d:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/disk.qcow2
	I0318 04:53:37.230269   20715 main.go:141] libmachine: STDOUT: 
	I0318 04:53:37.230285   20715 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:53:37.230296   20715 client.go:171] duration metric: took 273.963292ms to LocalClient.Create
	I0318 04:53:39.232558   20715 start.go:128] duration metric: took 2.33391125s to createHost
	I0318 04:53:39.232665   20715 start.go:83] releasing machines lock for "ha-404000", held for 2.334470041s
	W0318 04:53:39.233111   20715 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-404000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-404000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:53:39.246651   20715 out.go:177] 
	W0318 04:53:39.249859   20715 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:53:39.249906   20715 out.go:239] * 
	* 
	W0318 04:53:39.252389   20715 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:53:39.264689   20715 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-404000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000: exit status 7 (68.922458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.86s)
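Both VM create attempts above die at the same step: socket_vmnet_client cannot reach the host daemon at /var/run/socket_vmnet. That implicates the socket_vmnet service on the build agent rather than minikube itself. A minimal host-side check (a sketch; the socket and client paths come from the SocketVMnetPath and SocketVMnetClientPath values in the cluster config logged above):

    # Is the daemon loaded, and does its socket exist?
    sudo launchctl list | grep -i socket_vmnet
    ls -l /var/run/socket_vmnet

    # Can a client connect at all? "Connection refused" here
    # reproduces the failure captured in the log.
    nc -U /var/run/socket_vmnet < /dev/null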

                                                
                                    
TestMultiControlPlane/serial/DeployApp (87.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-404000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-404000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (59.520541ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-404000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-404000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-404000 -- rollout status deployment/busybox: exit status 1 (57.814209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-404000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.06575ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-404000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.275125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-404000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.286667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-404000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.900833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-404000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.760333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-404000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.781083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-404000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.515167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-404000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.291834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-404000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.081584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-404000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.943625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-404000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.866041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-404000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-404000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-404000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.438333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-404000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-404000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-404000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.441791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-404000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-404000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-404000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.994583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-404000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000: exit status 7 (32.010208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (87.42s)
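Every kubectl call in this test fails with the same kubeconfig error: because StartCluster never brought a host up, no cluster or context entry for "ha-404000" was ever written, so the 87 seconds of retries cannot converge. A quick confirmation sketch, using the KUBECONFIG path from the start log above:

    export KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
    kubectl config get-contexts
    kubectl config view -o jsonpath='{.clusters[*].name}'   # ha-404000 should be absent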

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-404000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.130291ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-404000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000: exit status 7 (32.045416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-404000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-404000 -v=7 --alsologtostderr: exit status 83 (46.5455ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-404000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-404000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:55:06.889080   20796 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:55:06.889620   20796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:06.889624   20796 out.go:304] Setting ErrFile to fd 2...
	I0318 04:55:06.889626   20796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:06.889765   20796 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:55:06.889980   20796 mustload.go:65] Loading cluster: ha-404000
	I0318 04:55:06.890167   20796 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:55:06.895686   20796 out.go:177] * The control-plane node ha-404000 host is not running: state=Stopped
	I0318 04:55:06.900567   20796 out.go:177]   To start a cluster, run: "minikube start -p ha-404000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-404000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000: exit status 7 (31.422583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-404000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-404000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (25.803583ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-404000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-404000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-404000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000: exit status 7 (30.897125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-404000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-404000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-404000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-404000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-404000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-404000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-404000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-404000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000: exit status 7 (31.205917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.10s)
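Both assertions parse the JSON emitted by "profile list --output json" and check the node count and the derived status. The same fields can be pulled out directly with jq (a triage sketch, assuming jq is installed on the agent):

    out/minikube-darwin-arm64 profile list --output json \
      | jq -c '.valid[] | {name: .Name, status: .Status, nodes: (.Config.Nodes | length)}'
    # for this run: {"name":"ha-404000","status":"Stopped","nodes":1}, i.e. 1 node, not 4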

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-404000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-404000 status --output json -v=7 --alsologtostderr: exit status 7 (31.299084ms)

                                                
                                                
-- stdout --
	{"Name":"ha-404000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:55:07.126234   20810 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:55:07.126365   20810 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:07.126368   20810 out.go:304] Setting ErrFile to fd 2...
	I0318 04:55:07.126371   20810 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:07.126504   20810 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:55:07.126611   20810 out.go:298] Setting JSON to true
	I0318 04:55:07.126622   20810 mustload.go:65] Loading cluster: ha-404000
	I0318 04:55:07.126680   20810 notify.go:220] Checking for updates...
	I0318 04:55:07.126828   20810 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:55:07.126835   20810 status.go:255] checking status of ha-404000 ...
	I0318 04:55:07.127025   20810 status.go:330] ha-404000 host status = "Stopped" (err=<nil>)
	I0318 04:55:07.127029   20810 status.go:343] host is not running, skipping remaining checks
	I0318 04:55:07.127031   20810 status.go:257] ha-404000 status: &{Name:ha-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-404000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000: exit status 7 (31.243208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
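The decode error at ha_test.go:333 is a shape mismatch rather than corrupt output: with a single node in the profile, the status command emits one JSON object (visible in the stdout block above), while the test unmarshals into a slice ([]cmd.Status), which only accepts a JSON array. The emitted shape can be checked directly (sketch, assuming jq):

    out/minikube-darwin-arm64 -p ha-404000 status --output json | jq 'type'
    # prints "object" here; an "array" is what []cmd.Status would decode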

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-404000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-404000 node stop m02 -v=7 --alsologtostderr: exit status 85 (52.257459ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:55:07.190541   20814 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:55:07.190868   20814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:07.190872   20814 out.go:304] Setting ErrFile to fd 2...
	I0318 04:55:07.190875   20814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:07.190995   20814 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:55:07.191229   20814 mustload.go:65] Loading cluster: ha-404000
	I0318 04:55:07.191442   20814 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:55:07.196114   20814 out.go:177] 
	W0318 04:55:07.200151   20814 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0318 04:55:07.200158   20814 out.go:239] * 
	* 
	W0318 04:55:07.203156   20814 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:55:07.207084   20814 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-404000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr: exit status 7 (31.777875ms)

-- stdout --
	ha-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:55:07.242100   20816 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:55:07.242239   20816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:07.242243   20816 out.go:304] Setting ErrFile to fd 2...
	I0318 04:55:07.242245   20816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:07.242367   20816 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:55:07.242492   20816 out.go:298] Setting JSON to false
	I0318 04:55:07.242503   20816 mustload.go:65] Loading cluster: ha-404000
	I0318 04:55:07.242543   20816 notify.go:220] Checking for updates...
	I0318 04:55:07.242739   20816 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:55:07.242748   20816 status.go:255] checking status of ha-404000 ...
	I0318 04:55:07.242994   20816 status.go:330] ha-404000 host status = "Stopped" (err=<nil>)
	I0318 04:55:07.242998   20816 status.go:343] host is not running, skipping remaining checks
	I0318 04:55:07.243000   20816 status.go:257] ha-404000 status: &{Name:ha-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr": ha-404000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr": ha-404000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr": ha-404000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr": ha-404000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000: exit status 7 (31.420542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)
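The stop itself never executed: exit status 85 with GUEST_NODE_RETRIEVE means the ha-404000 profile never registered an m02 node, consistent with the StartCluster failure earlier in this run. A minimal way to confirm that on the build host, assuming the same binary and profile name, would be:

    # list the nodes minikube has recorded for this profile
    out/minikube-darwin-arm64 node list -p ha-404000
    # or inspect the persisted profile config directly (path taken from the logs above)
    cat /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/ha-404000/config.json

If only the primary node appears, every m02 subtest that follows will fail the same way.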

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.1s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-404000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-404000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-404000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-404000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000: exit status 7 (31.583334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.10s)
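This check is driven entirely by the JSON emitted by 'profile list': the test expects Status "Degraded" once a secondary control plane is down, but the single-node profile reports "Stopped". A short filter over the same output, assuming jq is available on the host, makes the mismatch easy to read:

    # pull the reported status and node count out of the profile list JSON
    out/minikube-darwin-arm64 profile list --output json \
      | jq '.valid[] | select(.Name == "ha-404000") | {Status, nodes: (.Config.Nodes | length)}'

Against the blob above this prints Status "Stopped" with a node count of 1, which is exactly what the assertion rejects.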

TestMultiControlPlane/serial/RestartSecondaryNode (54.39s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-404000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-404000 node start m02 -v=7 --alsologtostderr: exit status 85 (52.019708ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0318 04:55:07.408221   20826 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:55:07.408603   20826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:07.408607   20826 out.go:304] Setting ErrFile to fd 2...
	I0318 04:55:07.408609   20826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:07.408774   20826 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:55:07.408999   20826 mustload.go:65] Loading cluster: ha-404000
	I0318 04:55:07.409178   20826 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:55:07.413518   20826 out.go:177] 
	W0318 04:55:07.417500   20826 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0318 04:55:07.417505   20826 out.go:239] * 
	* 
	W0318 04:55:07.419682   20826 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:55:07.424474   20826 out.go:177] 

** /stderr **
ha_test.go:422: I0318 04:55:07.408221   20826 out.go:291] Setting OutFile to fd 1 ...
I0318 04:55:07.408603   20826 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:55:07.408607   20826 out.go:304] Setting ErrFile to fd 2...
I0318 04:55:07.408609   20826 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:55:07.408774   20826 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
I0318 04:55:07.408999   20826 mustload.go:65] Loading cluster: ha-404000
I0318 04:55:07.409178   20826 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:55:07.413518   20826 out.go:177] 
W0318 04:55:07.417500   20826 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0318 04:55:07.417505   20826 out.go:239] * 
* 
W0318 04:55:07.419682   20826 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0318 04:55:07.424474   20826 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-404000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr: exit status 7 (32.289625ms)

-- stdout --
	ha-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:55:07.460070   20828 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:55:07.460218   20828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:07.460221   20828 out.go:304] Setting ErrFile to fd 2...
	I0318 04:55:07.460224   20828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:07.460360   20828 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:55:07.460474   20828 out.go:298] Setting JSON to false
	I0318 04:55:07.460489   20828 mustload.go:65] Loading cluster: ha-404000
	I0318 04:55:07.460537   20828 notify.go:220] Checking for updates...
	I0318 04:55:07.460693   20828 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:55:07.460703   20828 status.go:255] checking status of ha-404000 ...
	I0318 04:55:07.460911   20828 status.go:330] ha-404000 host status = "Stopped" (err=<nil>)
	I0318 04:55:07.460915   20828 status.go:343] host is not running, skipping remaining checks
	I0318 04:55:07.460918   20828 status.go:257] ha-404000 status: &{Name:ha-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr: exit status 7 (74.313541ms)

-- stdout --
	ha-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:55:08.456665   20830 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:55:08.456824   20830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:08.456828   20830 out.go:304] Setting ErrFile to fd 2...
	I0318 04:55:08.456831   20830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:08.456986   20830 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:55:08.457153   20830 out.go:298] Setting JSON to false
	I0318 04:55:08.457168   20830 mustload.go:65] Loading cluster: ha-404000
	I0318 04:55:08.457201   20830 notify.go:220] Checking for updates...
	I0318 04:55:08.457419   20830 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:55:08.457427   20830 status.go:255] checking status of ha-404000 ...
	I0318 04:55:08.457685   20830 status.go:330] ha-404000 host status = "Stopped" (err=<nil>)
	I0318 04:55:08.457690   20830 status.go:343] host is not running, skipping remaining checks
	I0318 04:55:08.457693   20830 status.go:257] ha-404000 status: &{Name:ha-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr: exit status 7 (77.963709ms)

-- stdout --
	ha-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:55:09.396775   20834 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:55:09.396967   20834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:09.396972   20834 out.go:304] Setting ErrFile to fd 2...
	I0318 04:55:09.396975   20834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:09.397134   20834 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:55:09.397285   20834 out.go:298] Setting JSON to false
	I0318 04:55:09.397307   20834 mustload.go:65] Loading cluster: ha-404000
	I0318 04:55:09.397349   20834 notify.go:220] Checking for updates...
	I0318 04:55:09.397557   20834 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:55:09.397565   20834 status.go:255] checking status of ha-404000 ...
	I0318 04:55:09.397852   20834 status.go:330] ha-404000 host status = "Stopped" (err=<nil>)
	I0318 04:55:09.397857   20834 status.go:343] host is not running, skipping remaining checks
	I0318 04:55:09.397860   20834 status.go:257] ha-404000 status: &{Name:ha-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr: exit status 7 (75.914291ms)

-- stdout --
	ha-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:55:12.473964   20839 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:55:12.474117   20839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:12.474121   20839 out.go:304] Setting ErrFile to fd 2...
	I0318 04:55:12.474124   20839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:12.474299   20839 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:55:12.474477   20839 out.go:298] Setting JSON to false
	I0318 04:55:12.474492   20839 mustload.go:65] Loading cluster: ha-404000
	I0318 04:55:12.474527   20839 notify.go:220] Checking for updates...
	I0318 04:55:12.474726   20839 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:55:12.474734   20839 status.go:255] checking status of ha-404000 ...
	I0318 04:55:12.475013   20839 status.go:330] ha-404000 host status = "Stopped" (err=<nil>)
	I0318 04:55:12.475018   20839 status.go:343] host is not running, skipping remaining checks
	I0318 04:55:12.475021   20839 status.go:257] ha-404000 status: &{Name:ha-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr: exit status 7 (77.312625ms)

-- stdout --
	ha-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:55:16.882407   20841 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:55:16.882603   20841 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:16.882608   20841 out.go:304] Setting ErrFile to fd 2...
	I0318 04:55:16.882611   20841 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:16.882782   20841 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:55:16.882940   20841 out.go:298] Setting JSON to false
	I0318 04:55:16.882965   20841 mustload.go:65] Loading cluster: ha-404000
	I0318 04:55:16.882993   20841 notify.go:220] Checking for updates...
	I0318 04:55:16.883240   20841 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:55:16.883248   20841 status.go:255] checking status of ha-404000 ...
	I0318 04:55:16.883495   20841 status.go:330] ha-404000 host status = "Stopped" (err=<nil>)
	I0318 04:55:16.883500   20841 status.go:343] host is not running, skipping remaining checks
	I0318 04:55:16.883502   20841 status.go:257] ha-404000 status: &{Name:ha-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr: exit status 7 (76.41275ms)

-- stdout --
	ha-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:55:21.302962   20843 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:55:21.303153   20843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:21.303157   20843 out.go:304] Setting ErrFile to fd 2...
	I0318 04:55:21.303160   20843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:21.303313   20843 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:55:21.303461   20843 out.go:298] Setting JSON to false
	I0318 04:55:21.303474   20843 mustload.go:65] Loading cluster: ha-404000
	I0318 04:55:21.303512   20843 notify.go:220] Checking for updates...
	I0318 04:55:21.303703   20843 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:55:21.303711   20843 status.go:255] checking status of ha-404000 ...
	I0318 04:55:21.303951   20843 status.go:330] ha-404000 host status = "Stopped" (err=<nil>)
	I0318 04:55:21.303956   20843 status.go:343] host is not running, skipping remaining checks
	I0318 04:55:21.303959   20843 status.go:257] ha-404000 status: &{Name:ha-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr: exit status 7 (74.152334ms)

-- stdout --
	ha-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:55:31.314223   20848 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:55:31.314429   20848 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:31.314433   20848 out.go:304] Setting ErrFile to fd 2...
	I0318 04:55:31.314436   20848 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:31.314600   20848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:55:31.314746   20848 out.go:298] Setting JSON to false
	I0318 04:55:31.314759   20848 mustload.go:65] Loading cluster: ha-404000
	I0318 04:55:31.314799   20848 notify.go:220] Checking for updates...
	I0318 04:55:31.315027   20848 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:55:31.315035   20848 status.go:255] checking status of ha-404000 ...
	I0318 04:55:31.315306   20848 status.go:330] ha-404000 host status = "Stopped" (err=<nil>)
	I0318 04:55:31.315312   20848 status.go:343] host is not running, skipping remaining checks
	I0318 04:55:31.315315   20848 status.go:257] ha-404000 status: &{Name:ha-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr: exit status 7 (77.121333ms)

-- stdout --
	ha-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:55:47.336230   20853 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:55:47.336425   20853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:47.336439   20853 out.go:304] Setting ErrFile to fd 2...
	I0318 04:55:47.336443   20853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:55:47.336597   20853 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:55:47.336753   20853 out.go:298] Setting JSON to false
	I0318 04:55:47.336768   20853 mustload.go:65] Loading cluster: ha-404000
	I0318 04:55:47.336814   20853 notify.go:220] Checking for updates...
	I0318 04:55:47.336993   20853 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:55:47.337001   20853 status.go:255] checking status of ha-404000 ...
	I0318 04:55:47.337252   20853 status.go:330] ha-404000 host status = "Stopped" (err=<nil>)
	I0318 04:55:47.337257   20853 status.go:343] host is not running, skipping remaining checks
	I0318 04:55:47.337260   20853 status.go:257] ha-404000 status: &{Name:ha-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr: exit status 7 (76.762041ms)

-- stdout --
	ha-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:56:01.724981   20857 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:56:01.725204   20857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:56:01.725209   20857 out.go:304] Setting ErrFile to fd 2...
	I0318 04:56:01.725212   20857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:56:01.725377   20857 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:56:01.725546   20857 out.go:298] Setting JSON to false
	I0318 04:56:01.725562   20857 mustload.go:65] Loading cluster: ha-404000
	I0318 04:56:01.725594   20857 notify.go:220] Checking for updates...
	I0318 04:56:01.725814   20857 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:56:01.725823   20857 status.go:255] checking status of ha-404000 ...
	I0318 04:56:01.726120   20857 status.go:330] ha-404000 host status = "Stopped" (err=<nil>)
	I0318 04:56:01.726125   20857 status.go:343] host is not running, skipping remaining checks
	I0318 04:56:01.726128   20857 status.go:257] ha-404000 status: &{Name:ha-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000: exit status 7 (34.394791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (54.39s)
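The 54-second wall time is almost all status polling: after the failed node start, ha_test.go:428 re-runs the status command at 04:55:07, 04:55:08, 04:55:09, 04:55:12, 04:55:16, 04:55:21, 04:55:31, 04:55:47, and 04:56:01, a roughly exponential backoff. The loop is equivalent in spirit to the sketch below (delays inferred from those timestamps, not taken from the test source):

    # re-check cluster status with growing delays, stopping on the first success
    for delay in 1 1 3 4 5 10 16 14; do
      out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr && break
      sleep "$delay"
    done

Every attempt exits 7 because the host is stopped, so the retries are exhausted before the test fails.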

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-404000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-404000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-404000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-404000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-404000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-404000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-404000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-404000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000: exit status 7 (32.325083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.11s)
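The HAppy assertions fail for the same single-node reason as the earlier subtests, while the entry that follows (RestartClusterKeepsNodes) exposes the underlying host problem: qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A quick health check on the Jenkins host might look like this (the Homebrew-managed service is an assumption; only the paths come from the logs):

    # is the daemon's socket present?
    ls -l /var/run/socket_vmnet
    # restart the daemon if not (assumes socket_vmnet was installed via Homebrew)
    sudo brew services restart socket_vmnet

Until that daemon is reachable, each "Restarting existing qemu2 VM" attempt below exits with "Connection refused".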

TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.95s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-404000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-404000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-404000 -v=7 --alsologtostderr: (3.580025708s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-404000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-404000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.231149125s)

-- stdout --
	* [ha-404000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-404000" primary control-plane node in "ha-404000" cluster
	* Restarting existing qemu2 VM for "ha-404000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-404000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:56:05.547386   20889 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:56:05.547550   20889 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:56:05.547555   20889 out.go:304] Setting ErrFile to fd 2...
	I0318 04:56:05.547558   20889 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:56:05.547726   20889 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:56:05.548908   20889 out.go:298] Setting JSON to false
	I0318 04:56:05.567824   20889 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10538,"bootTime":1710752427,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:56:05.567886   20889 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:56:05.571874   20889 out.go:177] * [ha-404000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:56:05.579861   20889 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 04:56:05.579899   20889 notify.go:220] Checking for updates...
	I0318 04:56:05.589894   20889 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 04:56:05.592880   20889 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:56:05.595874   20889 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:56:05.598906   20889 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 04:56:05.600350   20889 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:56:05.604200   20889 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:56:05.604263   20889 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:56:05.608870   20889 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:56:05.613847   20889 start.go:297] selected driver: qemu2
	I0318 04:56:05.613853   20889 start.go:901] validating driver "qemu2" against &{Name:ha-404000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-404000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:56:05.613924   20889 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:56:05.616218   20889 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:56:05.616262   20889 cni.go:84] Creating CNI manager for ""
	I0318 04:56:05.616268   20889 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 04:56:05.616314   20889 start.go:340] cluster config:
	{Name:ha-404000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-404000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:56:05.620980   20889 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:56:05.627791   20889 out.go:177] * Starting "ha-404000" primary control-plane node in "ha-404000" cluster
	I0318 04:56:05.631790   20889 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:56:05.631807   20889 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:56:05.631821   20889 cache.go:56] Caching tarball of preloaded images
	I0318 04:56:05.631892   20889 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:56:05.631899   20889 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:56:05.631965   20889 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/ha-404000/config.json ...
	I0318 04:56:05.632313   20889 start.go:360] acquireMachinesLock for ha-404000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:56:05.632350   20889 start.go:364] duration metric: took 26.833µs to acquireMachinesLock for "ha-404000"
	I0318 04:56:05.632359   20889 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:56:05.632368   20889 fix.go:54] fixHost starting: 
	I0318 04:56:05.632481   20889 fix.go:112] recreateIfNeeded on ha-404000: state=Stopped err=<nil>
	W0318 04:56:05.632492   20889 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:56:05.635769   20889 out.go:177] * Restarting existing qemu2 VM for "ha-404000" ...
	I0318 04:56:05.643902   20889 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:65:28:39:7d:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/disk.qcow2
	I0318 04:56:05.646017   20889 main.go:141] libmachine: STDOUT: 
	I0318 04:56:05.646039   20889 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:56:05.646070   20889 fix.go:56] duration metric: took 13.702167ms for fixHost
	I0318 04:56:05.646076   20889 start.go:83] releasing machines lock for "ha-404000", held for 13.722ms
	W0318 04:56:05.646083   20889 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:56:05.646120   20889 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:56:05.646125   20889 start.go:728] Will try again in 5 seconds ...
	I0318 04:56:10.648137   20889 start.go:360] acquireMachinesLock for ha-404000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:56:10.648518   20889 start.go:364] duration metric: took 285.709µs to acquireMachinesLock for "ha-404000"
	I0318 04:56:10.648660   20889 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:56:10.648683   20889 fix.go:54] fixHost starting: 
	I0318 04:56:10.649417   20889 fix.go:112] recreateIfNeeded on ha-404000: state=Stopped err=<nil>
	W0318 04:56:10.649446   20889 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:56:10.658940   20889 out.go:177] * Restarting existing qemu2 VM for "ha-404000" ...
	I0318 04:56:10.664111   20889 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:65:28:39:7d:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/disk.qcow2
	I0318 04:56:10.674535   20889 main.go:141] libmachine: STDOUT: 
	I0318 04:56:10.674624   20889 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:56:10.674684   20889 fix.go:56] duration metric: took 26.007083ms for fixHost
	I0318 04:56:10.674704   20889 start.go:83] releasing machines lock for "ha-404000", held for 26.161375ms
	W0318 04:56:10.674895   20889 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-404000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-404000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:56:10.681924   20889 out.go:177] 
	W0318 04:56:10.685878   20889 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:56:10.685905   20889 out.go:239] * 
	* 
	W0318 04:56:10.688768   20889 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:56:10.697826   20889 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-404000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-404000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000: exit status 7 (34.20875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.95s)
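
Every failed start in this group dies on the same driver error: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), i.e. the socket_vmnet daemon is not serving on the agent. A minimal Go sketch of that reachability check follows; it is not part of the test suite and assumes only the SocketVMnetPath recorded in the profile config above.

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// SocketVMnetPath from the captured profile config; everything else here is illustrative.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Reproduces the libmachine failure above: nothing is listening on the socket.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections on", sock)
	}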

TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-404000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-404000 node delete m03 -v=7 --alsologtostderr: exit status 83 (45.190208ms)

-- stdout --
	* The control-plane node ha-404000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-404000"

-- /stdout --
** stderr ** 
	I0318 04:56:10.848919   20904 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:56:10.849281   20904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:56:10.849288   20904 out.go:304] Setting ErrFile to fd 2...
	I0318 04:56:10.849291   20904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:56:10.849483   20904 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:56:10.849682   20904 mustload.go:65] Loading cluster: ha-404000
	I0318 04:56:10.849866   20904 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:56:10.854903   20904 out.go:177] * The control-plane node ha-404000 host is not running: state=Stopped
	I0318 04:56:10.858816   20904 out.go:177]   To start a cluster, run: "minikube start -p ha-404000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-404000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr: exit status 7 (32.3285ms)

-- stdout --
	ha-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:56:10.894506   20906 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:56:10.894649   20906 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:56:10.894653   20906 out.go:304] Setting ErrFile to fd 2...
	I0318 04:56:10.894655   20906 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:56:10.894784   20906 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:56:10.894895   20906 out.go:298] Setting JSON to false
	I0318 04:56:10.894907   20906 mustload.go:65] Loading cluster: ha-404000
	I0318 04:56:10.894970   20906 notify.go:220] Checking for updates...
	I0318 04:56:10.895107   20906 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:56:10.895114   20906 status.go:255] checking status of ha-404000 ...
	I0318 04:56:10.895350   20906 status.go:330] ha-404000 host status = "Stopped" (err=<nil>)
	I0318 04:56:10.895354   20906 status.go:343] host is not running, skipping remaining checks
	I0318 04:56:10.895356   20906 status.go:257] ha-404000 status: &{Name:ha-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000: exit status 7 (32.28225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.11s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-404000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-404000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-404000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-404000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000: exit status 7 (31.655833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.11s)
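
The Degraded/HAppy assertions above are driven entirely by "minikube profile list --output json": the test decodes the dump shown in the got-clause and compares the profile's Status string and the number of entries in Config.Nodes. A sketch of that decoding is below; the field names come from the JSON captured above and the command path matches the test binary, but the struct and program around them are hypothetical.

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// Mirrors the shape of the "profile list --output json" dump captured above.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
			Config struct {
				Nodes []json.RawMessage `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			log.Fatal(err)
		}
		for _, p := range pl.Valid {
			// The assertion wants Status "Degraded" here; this run reports "Stopped" with 1 node.
			fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
		}
	}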

TestMultiControlPlane/serial/StopCluster (3.96s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-404000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-404000 stop -v=7 --alsologtostderr: (3.852674458s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr: exit status 7 (74.557417ms)

-- stdout --
	ha-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:56:14.961564   20936 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:56:14.961777   20936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:56:14.961782   20936 out.go:304] Setting ErrFile to fd 2...
	I0318 04:56:14.961785   20936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:56:14.961950   20936 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:56:14.962113   20936 out.go:298] Setting JSON to false
	I0318 04:56:14.962136   20936 mustload.go:65] Loading cluster: ha-404000
	I0318 04:56:14.962167   20936 notify.go:220] Checking for updates...
	I0318 04:56:14.962406   20936 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:56:14.962418   20936 status.go:255] checking status of ha-404000 ...
	I0318 04:56:14.962714   20936 status.go:330] ha-404000 host status = "Stopped" (err=<nil>)
	I0318 04:56:14.962719   20936 status.go:343] host is not running, skipping remaining checks
	I0318 04:56:14.962722   20936 status.go:257] ha-404000 status: &{Name:ha-404000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr": ha-404000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr": ha-404000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-404000 status -v=7 --alsologtostderr": ha-404000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000: exit status 7 (34.157416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.96s)

TestMultiControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-404000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-404000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.186171625s)

-- stdout --
	* [ha-404000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-404000" primary control-plane node in "ha-404000" cluster
	* Restarting existing qemu2 VM for "ha-404000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-404000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:56:15.028111   20940 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:56:15.028248   20940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:56:15.028251   20940 out.go:304] Setting ErrFile to fd 2...
	I0318 04:56:15.028253   20940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:56:15.028407   20940 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:56:15.029410   20940 out.go:298] Setting JSON to false
	I0318 04:56:15.045413   20940 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10548,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:56:15.045487   20940 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:56:15.049857   20940 out.go:177] * [ha-404000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:56:15.058715   20940 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 04:56:15.058782   20940 notify.go:220] Checking for updates...
	I0318 04:56:15.062802   20940 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 04:56:15.066757   20940 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:56:15.069806   20940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:56:15.073694   20940 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 04:56:15.076805   20940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:56:15.080075   20940 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:56:15.080347   20940 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:56:15.084800   20940 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:56:15.091772   20940 start.go:297] selected driver: qemu2
	I0318 04:56:15.091779   20940 start.go:901] validating driver "qemu2" against &{Name:ha-404000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.28.4 ClusterName:ha-404000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:56:15.091851   20940 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:56:15.094153   20940 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:56:15.094195   20940 cni.go:84] Creating CNI manager for ""
	I0318 04:56:15.094201   20940 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 04:56:15.094253   20940 start.go:340] cluster config:
	{Name:ha-404000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-404000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:56:15.098720   20940 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:56:15.104762   20940 out.go:177] * Starting "ha-404000" primary control-plane node in "ha-404000" cluster
	I0318 04:56:15.108775   20940 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:56:15.108789   20940 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:56:15.108801   20940 cache.go:56] Caching tarball of preloaded images
	I0318 04:56:15.108861   20940 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:56:15.108867   20940 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:56:15.108928   20940 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/ha-404000/config.json ...
	I0318 04:56:15.109344   20940 start.go:360] acquireMachinesLock for ha-404000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:56:15.109370   20940 start.go:364] duration metric: took 19.959µs to acquireMachinesLock for "ha-404000"
	I0318 04:56:15.109379   20940 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:56:15.109384   20940 fix.go:54] fixHost starting: 
	I0318 04:56:15.109495   20940 fix.go:112] recreateIfNeeded on ha-404000: state=Stopped err=<nil>
	W0318 04:56:15.109503   20940 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:56:15.116769   20940 out.go:177] * Restarting existing qemu2 VM for "ha-404000" ...
	I0318 04:56:15.120761   20940 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:65:28:39:7d:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/disk.qcow2
	I0318 04:56:15.122960   20940 main.go:141] libmachine: STDOUT: 
	I0318 04:56:15.122980   20940 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:56:15.123006   20940 fix.go:56] duration metric: took 13.62125ms for fixHost
	I0318 04:56:15.123011   20940 start.go:83] releasing machines lock for "ha-404000", held for 13.637834ms
	W0318 04:56:15.123018   20940 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:56:15.123048   20940 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:56:15.123053   20940 start.go:728] Will try again in 5 seconds ...
	I0318 04:56:20.125105   20940 start.go:360] acquireMachinesLock for ha-404000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:56:20.125450   20940 start.go:364] duration metric: took 219.875µs to acquireMachinesLock for "ha-404000"
	I0318 04:56:20.125596   20940 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:56:20.125614   20940 fix.go:54] fixHost starting: 
	I0318 04:56:20.126323   20940 fix.go:112] recreateIfNeeded on ha-404000: state=Stopped err=<nil>
	W0318 04:56:20.126354   20940 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:56:20.131053   20940 out.go:177] * Restarting existing qemu2 VM for "ha-404000" ...
	I0318 04:56:20.136002   20940 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:65:28:39:7d:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/ha-404000/disk.qcow2
	I0318 04:56:20.145859   20940 main.go:141] libmachine: STDOUT: 
	I0318 04:56:20.145925   20940 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:56:20.146048   20940 fix.go:56] duration metric: took 20.392125ms for fixHost
	I0318 04:56:20.146069   20940 start.go:83] releasing machines lock for "ha-404000", held for 20.58325ms
	W0318 04:56:20.146244   20940 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-404000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-404000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:56:20.154790   20940 out.go:177] 
	W0318 04:56:20.157951   20940 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:56:20.157975   20940 out.go:239] * 
	* 
	W0318 04:56:20.160666   20940 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:56:20.169955   20940 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-404000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000: exit status 7 (67.70975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)
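
The RestartCluster log shows the same start/retry shape as the earlier restarts: one fixHost attempt, a StartHost warning, a fixed 5-second pause (start.go:728), one more attempt, then exit with GUEST_PROVISION. A compressed Go sketch of that control flow, with illustrative names only:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// Stand-in for the qemu2 driver start that fails while socket_vmnet is down.
	func startHost() error {
		return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}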

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.1s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-404000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-404000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-404000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-404000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000: exit status 7 (32.0395ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.10s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-404000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-404000 --control-plane -v=7 --alsologtostderr: exit status 83 (42.563583ms)

-- stdout --
	* The control-plane node ha-404000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-404000"

-- /stdout --
** stderr ** 
	I0318 04:56:20.388861   20956 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:56:20.389012   20956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:56:20.389016   20956 out.go:304] Setting ErrFile to fd 2...
	I0318 04:56:20.389018   20956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:56:20.389171   20956 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:56:20.389417   20956 mustload.go:65] Loading cluster: ha-404000
	I0318 04:56:20.389638   20956 config.go:182] Loaded profile config "ha-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:56:20.393411   20956 out.go:177] * The control-plane node ha-404000 host is not running: state=Stopped
	I0318 04:56:20.397150   20956 out.go:177]   To start a cluster, run: "minikube start -p ha-404000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-404000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000: exit status 7 (32.061583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-404000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-404000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-404000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-404000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-404000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-404000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-404000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-404000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-404000 -n ha-404000: exit status 7 (31.612875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)

TestImageBuild/serial/Setup (9.92s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-906000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-906000 --driver=qemu2 : exit status 80 (9.842864083s)

-- stdout --
	* [image-906000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-906000" primary control-plane node in "image-906000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-906000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-906000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-906000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-906000 -n image-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-906000 -n image-906000: exit status 7 (74.35625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.92s)

TestJSONOutput/start/Command (9.73s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-370000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-370000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.726933542s)

-- stdout --
	{"specversion":"1.0","id":"fa7af119-ffcd-425f-840a-3439eb85e29c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-370000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2ae5e2c5-3b9a-4f28-8c3d-de49dcda4309","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18427"}}
	{"specversion":"1.0","id":"722ad1f3-9cd5-494a-991a-c6844b6fae8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig"}}
	{"specversion":"1.0","id":"26f67c87-6b80-4f0e-b126-47a1b5393575","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"aca0fe98-9951-4a85-ba73-0a436db48894","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ebb31942-cf3f-476a-b32a-3422827c4fa6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube"}}
	{"specversion":"1.0","id":"4b1aed26-e9a8-4f22-a65c-d09c9f5bf939","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"803d3e8a-5d96-40e5-af1a-d18443fcbee6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fd4c9125-0295-44a5-b812-08cc3ad59247","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"ab0c675f-22b1-4018-8c8e-c2a2d067d807","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-370000\" primary control-plane node in \"json-output-370000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c19eadd-19e7-4547-985e-48957678c111","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"52e97d11-0aed-4082-afe0-06cce55b99ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-370000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"a841a3fe-f2bb-40eb-98e2-b54aadaf753c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"6830139a-9883-4fca-9e8b-070a81551633","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"01a666d8-0bd5-49d3-a93e-1c9a3ccf4b4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-370000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"f1eed85d-184d-4b47-b421-5141f1e09823","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"2627fcca-8347-4f31-aba8-ee3166464290","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-370000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.73s)
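The JSON-output failure above is mechanical: json_output_test.go decodes every stdout line as a CloudEvent, and the driver's plain-text OUTPUT:/ERROR: lines are not JSON, so decoding aborts at the first stray byte. A simplified sketch of that decode step (the event struct is a hypothetical stand-in, not the test's actual types):

package main

import (
	"encoding/json"
	"fmt"
)

// event models just enough of the CloudEvent envelope for the demo.
type event struct {
	SpecVersion string `json:"specversion"`
	Type        string `json:"type"`
}

func main() {
	lines := []string{
		`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`,
		`OUTPUT: `, // stray plain-text line emitted by the qemu2 driver
	}
	for _, l := range lines {
		var e event
		if err := json.Unmarshal([]byte(l), &e); err != nil {
			// Prints: invalid character 'O' looking for beginning of value,
			// the same error reported at json_output_test.go:70 above.
			fmt.Println("converting to cloud events:", err)
			return
		}
		fmt.Println("ok:", e.Type)
	}
}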

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-370000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-370000 --output=json --user=testUser: exit status 83 (78.877333ms)

-- stdout --
	{"specversion":"1.0","id":"c003e018-a897-4202-a860-c3764817300d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-370000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"8c22730b-4353-4431-a3e3-821a5d01e11a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-370000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-370000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-370000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-370000 --output=json --user=testUser: exit status 83 (47.339375ms)

-- stdout --
	* The control-plane node json-output-370000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-370000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-370000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-370000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.22s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-999000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-999000 --driver=qemu2 : exit status 80 (9.772537875s)

-- stdout --
	* [first-999000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-999000" primary control-plane node in "first-999000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-999000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-999000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-999000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-18 04:56:54.043806 -0700 PDT m=+533.033306251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-001000 -n second-001000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-001000 -n second-001000: exit status 85 (81.891416ms)

-- stdout --
	* Profile "second-001000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-001000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-001000" host is not running, skipping log retrieval (state="* Profile \"second-001000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-001000\"")
helpers_test.go:175: Cleaning up "second-001000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-001000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-18 04:56:54.359889 -0700 PDT m=+533.349399001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-999000 -n first-999000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-999000 -n first-999000: exit status 7 (32.250292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-999000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-999000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-999000
--- FAIL: TestMinikubeProfile (10.22s)
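Note the two status codes in the post-mortem above: exit status 85 means the profile does not exist (second-001000 was never created) and exit status 7 means the host is stopped; the helper flags both as "(may be ok)". A short sketch of reading such an exit code from Go (binary path and profile name taken from the log; this is not the helper's actual code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation shape as helpers_test.go:239 above.
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "first-999000")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// e.g. 7 (host stopped) or 85 (profile not found), per the log.
		fmt.Println("exit status:", ee.ExitCode())
	}
}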

TestMountStart/serial/StartWithMountFirst (10.66s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-810000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-810000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.588313667s)

-- stdout --
	* [mount-start-1-810000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-810000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-810000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-810000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-810000 -n mount-start-1-810000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-810000 -n mount-start-1-810000: exit status 7 (69.827416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-810000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.66s)

TestMultiNode/serial/FreshStart2Nodes (9.88s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-730000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-730000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.806560917s)

-- stdout --
	* [multinode-730000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-730000" primary control-plane node in "multinode-730000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-730000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:57:05.516885   21122 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:57:05.517054   21122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:57:05.517061   21122 out.go:304] Setting ErrFile to fd 2...
	I0318 04:57:05.517063   21122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:57:05.517188   21122 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:57:05.518236   21122 out.go:298] Setting JSON to false
	I0318 04:57:05.534321   21122 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10598,"bootTime":1710752427,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:57:05.534375   21122 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:57:05.540604   21122 out.go:177] * [multinode-730000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:57:05.553524   21122 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 04:57:05.548682   21122 notify.go:220] Checking for updates...
	I0318 04:57:05.560505   21122 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 04:57:05.564567   21122 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:57:05.567560   21122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:57:05.570553   21122 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 04:57:05.573538   21122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:57:05.576670   21122 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:57:05.580528   21122 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:57:05.586526   21122 start.go:297] selected driver: qemu2
	I0318 04:57:05.586532   21122 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:57:05.586542   21122 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:57:05.588876   21122 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:57:05.593480   21122 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:57:05.596588   21122 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:57:05.596624   21122 cni.go:84] Creating CNI manager for ""
	I0318 04:57:05.596629   21122 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 04:57:05.596633   21122 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 04:57:05.596669   21122 start.go:340] cluster config:
	{Name:multinode-730000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-730000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:57:05.601447   21122 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:57:05.608542   21122 out.go:177] * Starting "multinode-730000" primary control-plane node in "multinode-730000" cluster
	I0318 04:57:05.612528   21122 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:57:05.612544   21122 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:57:05.612559   21122 cache.go:56] Caching tarball of preloaded images
	I0318 04:57:05.612617   21122 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:57:05.612626   21122 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:57:05.612866   21122 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/multinode-730000/config.json ...
	I0318 04:57:05.612877   21122 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/multinode-730000/config.json: {Name:mk4b2cea5abb37e4b3fc60c7219333337a047e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:57:05.613105   21122 start.go:360] acquireMachinesLock for multinode-730000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:57:05.613138   21122 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "multinode-730000"
	I0318 04:57:05.613151   21122 start.go:93] Provisioning new machine with config: &{Name:multinode-730000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-730000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:57:05.613181   21122 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:57:05.621377   21122 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:57:05.639051   21122 start.go:159] libmachine.API.Create for "multinode-730000" (driver="qemu2")
	I0318 04:57:05.639077   21122 client.go:168] LocalClient.Create starting
	I0318 04:57:05.639138   21122 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 04:57:05.639168   21122 main.go:141] libmachine: Decoding PEM data...
	I0318 04:57:05.639181   21122 main.go:141] libmachine: Parsing certificate...
	I0318 04:57:05.639227   21122 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 04:57:05.639250   21122 main.go:141] libmachine: Decoding PEM data...
	I0318 04:57:05.639258   21122 main.go:141] libmachine: Parsing certificate...
	I0318 04:57:05.639697   21122 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:57:05.798270   21122 main.go:141] libmachine: Creating SSH key...
	I0318 04:57:05.898136   21122 main.go:141] libmachine: Creating Disk image...
	I0318 04:57:05.898143   21122 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:57:05.898347   21122 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/disk.qcow2
	I0318 04:57:05.910750   21122 main.go:141] libmachine: STDOUT: 
	I0318 04:57:05.910773   21122 main.go:141] libmachine: STDERR: 
	I0318 04:57:05.910835   21122 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/disk.qcow2 +20000M
	I0318 04:57:05.921606   21122 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:57:05.921639   21122 main.go:141] libmachine: STDERR: 
	I0318 04:57:05.921657   21122 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/disk.qcow2
	I0318 04:57:05.921661   21122 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:57:05.921696   21122 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:3c:11:1c:01:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/disk.qcow2
	I0318 04:57:05.923434   21122 main.go:141] libmachine: STDOUT: 
	I0318 04:57:05.923450   21122 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:57:05.923471   21122 client.go:171] duration metric: took 284.397709ms to LocalClient.Create
	I0318 04:57:07.925640   21122 start.go:128] duration metric: took 2.312509542s to createHost
	I0318 04:57:07.925746   21122 start.go:83] releasing machines lock for "multinode-730000", held for 2.312646416s
	W0318 04:57:07.925824   21122 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:57:07.936135   21122 out.go:177] * Deleting "multinode-730000" in qemu2 ...
	W0318 04:57:07.961732   21122 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:57:07.961778   21122 start.go:728] Will try again in 5 seconds ...
	I0318 04:57:12.963962   21122 start.go:360] acquireMachinesLock for multinode-730000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:57:12.964454   21122 start.go:364] duration metric: took 353.208µs to acquireMachinesLock for "multinode-730000"
	I0318 04:57:12.964609   21122 start.go:93] Provisioning new machine with config: &{Name:multinode-730000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-730000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:57:12.964896   21122 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:57:12.976628   21122 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:57:13.025006   21122 start.go:159] libmachine.API.Create for "multinode-730000" (driver="qemu2")
	I0318 04:57:13.025052   21122 client.go:168] LocalClient.Create starting
	I0318 04:57:13.025152   21122 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 04:57:13.025210   21122 main.go:141] libmachine: Decoding PEM data...
	I0318 04:57:13.025227   21122 main.go:141] libmachine: Parsing certificate...
	I0318 04:57:13.025290   21122 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 04:57:13.025332   21122 main.go:141] libmachine: Decoding PEM data...
	I0318 04:57:13.025343   21122 main.go:141] libmachine: Parsing certificate...
	I0318 04:57:13.025928   21122 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:57:13.174462   21122 main.go:141] libmachine: Creating SSH key...
	I0318 04:57:13.219605   21122 main.go:141] libmachine: Creating Disk image...
	I0318 04:57:13.219610   21122 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:57:13.219814   21122 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/disk.qcow2
	I0318 04:57:13.232096   21122 main.go:141] libmachine: STDOUT: 
	I0318 04:57:13.232117   21122 main.go:141] libmachine: STDERR: 
	I0318 04:57:13.232166   21122 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/disk.qcow2 +20000M
	I0318 04:57:13.243143   21122 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:57:13.243162   21122 main.go:141] libmachine: STDERR: 
	I0318 04:57:13.243173   21122 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/disk.qcow2
	I0318 04:57:13.243186   21122 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:57:13.243221   21122 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:2f:59:63:d3:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/disk.qcow2
	I0318 04:57:13.245043   21122 main.go:141] libmachine: STDOUT: 
	I0318 04:57:13.245059   21122 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:57:13.245073   21122 client.go:171] duration metric: took 220.023167ms to LocalClient.Create
	I0318 04:57:15.247295   21122 start.go:128] duration metric: took 2.282407792s to createHost
	I0318 04:57:15.247391   21122 start.go:83] releasing machines lock for "multinode-730000", held for 2.282985916s
	W0318 04:57:15.247986   21122 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-730000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-730000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:57:15.263705   21122 out.go:177] 
	W0318 04:57:15.266842   21122 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:57:15.266909   21122 out.go:239] * 
	* 
	W0318 04:57:15.269321   21122 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:57:15.278698   21122 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-730000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000: exit status 7 (68.687792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.88s)
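The verbose stderr above shows why this failure is fatal: qemu-system-aarch64 is launched through socket_vmnet_client, so a dead socket kills the launch before QEMU even starts. A stripped-down sketch of that invocation (flags abridged from the libmachine log line above; not the driver's actual code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// socket_vmnet_client connects to the socket first, then execs qemu
	// with the network fd attached; paths are the ones from the log above.
	cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
		"/var/run/socket_vmnet",
		"qemu-system-aarch64", "-M", "virt,highmem=off",
		"-display", "none", "-accel", "hvf")
	if out, err := cmd.CombinedOutput(); err != nil {
		// Expected on this host: Failed to connect to
		// "/var/run/socket_vmnet": Connection refused
		fmt.Printf("launch failed: %v\n%s", err, out)
	}
}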

TestMultiNode/serial/DeployApp2Nodes (103.63s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-730000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-730000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (59.882ms)

** stderr ** 
	error: cluster "multinode-730000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-730000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-730000 -- rollout status deployment/busybox: exit status 1 (58.452667ms)

** stderr ** 
	error: no server found for cluster "multinode-730000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.581292ms)

** stderr ** 
	error: no server found for cluster "multinode-730000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.203ms)

** stderr ** 
	error: no server found for cluster "multinode-730000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.904292ms)

** stderr ** 
	error: no server found for cluster "multinode-730000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.531166ms)

** stderr ** 
	error: no server found for cluster "multinode-730000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.921375ms)

** stderr ** 
	error: no server found for cluster "multinode-730000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.215875ms)

** stderr ** 
	error: no server found for cluster "multinode-730000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.569042ms)

** stderr ** 
	error: no server found for cluster "multinode-730000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.038334ms)

** stderr ** 
	error: no server found for cluster "multinode-730000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.93225ms)

** stderr ** 
	error: no server found for cluster "multinode-730000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.471708ms)

** stderr ** 
	error: no server found for cluster "multinode-730000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.227542ms)

** stderr ** 
	error: no server found for cluster "multinode-730000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.036417ms)

** stderr ** 
	error: no server found for cluster "multinode-730000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-730000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-730000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.317083ms)

** stderr ** 
	error: no server found for cluster "multinode-730000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-730000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-730000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.683875ms)

** stderr ** 
	error: no server found for cluster "multinode-730000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-730000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-730000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.669083ms)

** stderr ** 
	error: no server found for cluster "multinode-730000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000: exit status 7 (32.168ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (103.63s)
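
Every kubectl call in this block fails with the same `error: no server found for cluster "multinode-730000"`: the VM never came up, so the profile's kubeconfig entry has no server endpoint, and the later pod-IP and DNS assertions are collateral damage. A minimal pre-flight sketch in Go (the kubeconfig path is the one this run uses; client-go availability is an assumption):

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from this run's environment.
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/18427-19517/kubeconfig")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cluster, ok := cfg.Clusters["multinode-730000"]
		if !ok {
			fmt.Println(`no cluster entry for "multinode-730000" -- consistent with "no server found"`)
			return
		}
		fmt.Println("server:", cluster.Server)
	}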

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-730000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.209708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-730000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000: exit status 7 (32.077125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-730000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-730000 -v 3 --alsologtostderr: exit status 83 (40.871167ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-730000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-730000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:58:59.108263   21208 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:58:59.108431   21208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:58:59.108434   21208 out.go:304] Setting ErrFile to fd 2...
	I0318 04:58:59.108437   21208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:58:59.108558   21208 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:58:59.108806   21208 mustload.go:65] Loading cluster: multinode-730000
	I0318 04:58:59.108995   21208 config.go:182] Loaded profile config "multinode-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:58:59.113436   21208 out.go:177] * The control-plane node multinode-730000 host is not running: state=Stopped
	I0318 04:58:59.116500   21208 out.go:177]   To start a cluster, run: "minikube start -p multinode-730000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-730000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000: exit status 7 (32.234583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
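
`node add` exits with status 83 because the control-plane host is stopped; the tool itself prints the remedy. A sketch of gating the call on host state, using only the binary and flags that appear in this log (the helper is hypothetical, not part of the harness):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// ensureRunning starts the profile unless `status --format={{.Host}}`
	// already reports "Running".
	func ensureRunning(profile string) error {
		out, _ := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", profile).Output()
		if strings.TrimSpace(string(out)) == "Running" {
			return nil
		}
		return exec.Command("out/minikube-darwin-arm64", "start", "-p", profile).Run()
	}

	func main() {
		if err := ensureRunning("multinode-730000"); err != nil {
			fmt.Println("start failed:", err)
			return
		}
		// Only once the host runs is "node add" expected to succeed.
		err := exec.Command("out/minikube-darwin-arm64", "node", "add",
			"-p", "multinode-730000", "-v", "3").Run()
		fmt.Println(err)
	}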

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-730000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-730000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.483541ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-730000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-730000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-730000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000: exit status 7 (32.576042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
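
Two errors stack here: kubectl fails because the context is missing, and the harness then feeds the empty output to a JSON decoder, which can only report "unexpected end of JSON input". The second message is a symptom, not a cause. A guard sketch (the label-slice shape is assumed for illustration):

	package main

	import (
		"bytes"
		"encoding/json"
		"fmt"
	)

	// decodeLabels refuses empty input so the real failure (the missing
	// kubectl context) is reported instead of a misleading decode error.
	func decodeLabels(out []byte) ([]map[string]string, error) {
		if len(bytes.TrimSpace(out)) == 0 {
			return nil, fmt.Errorf("kubectl produced no output; fix the missing context first")
		}
		var labels []map[string]string
		if err := json.Unmarshal(out, &labels); err != nil {
			return nil, err
		}
		return labels, nil
	}

	func main() {
		_, err := decodeLabels(nil)
		fmt.Println(err)
	}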

                                                
                                    
TestMultiNode/serial/ProfileList (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-730000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-730000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-730000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-730000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000: exit status 7 (32.056042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
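
The assertion wants 3 nodes, but the captured profile JSON carries a single entry in `Config.Nodes`, because the two extra nodes were never added. Decoding just enough of that JSON to count nodes makes the check easy to reproduce by hand; field names below follow the output captured above:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList models only the fields needed to count nodes.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []json.RawMessage `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Println(err)
			return
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
		}
	}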

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-730000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-730000 status --output json --alsologtostderr: exit status 7 (31.910958ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-730000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:58:59.346046   21221 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:58:59.346204   21221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:58:59.346207   21221 out.go:304] Setting ErrFile to fd 2...
	I0318 04:58:59.346209   21221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:58:59.346365   21221 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:58:59.346489   21221 out.go:298] Setting JSON to true
	I0318 04:58:59.346501   21221 mustload.go:65] Loading cluster: multinode-730000
	I0318 04:58:59.346560   21221 notify.go:220] Checking for updates...
	I0318 04:58:59.346694   21221 config.go:182] Loaded profile config "multinode-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:58:59.346701   21221 status.go:255] checking status of multinode-730000 ...
	I0318 04:58:59.346907   21221 status.go:330] multinode-730000 host status = "Stopped" (err=<nil>)
	I0318 04:58:59.346911   21221 status.go:343] host is not running, skipping remaining checks
	I0318 04:58:59.346914   21221 status.go:257] multinode-730000 status: &{Name:multinode-730000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-730000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000: exit status 7 (32.119375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
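
The decode error is a shape mismatch: with a single stopped node, `status --output json` printed a bare object, while the harness unmarshals into `[]cmd.Status` and so expects an array. A tolerant-decoder sketch (field names copied from the object above; `nodeStatus` is a stand-in for the harness type):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type nodeStatus struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	// decodeStatus accepts either a JSON array (multi-node) or the bare
	// object this run produced.
	func decodeStatus(out []byte) ([]nodeStatus, error) {
		var many []nodeStatus
		if err := json.Unmarshal(out, &many); err == nil {
			return many, nil
		}
		var one nodeStatus
		if err := json.Unmarshal(out, &one); err != nil {
			return nil, err
		}
		return []nodeStatus{one}, nil
	}

	func main() {
		got, err := decodeStatus([]byte(`{"Name":"multinode-730000","Host":"Stopped","Worker":false}`))
		fmt.Println(got, err)
	}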

                                                
                                    
TestMultiNode/serial/StopNode (0.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-730000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-730000 node stop m03: exit status 85 (51.798708ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-730000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-730000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-730000 status: exit status 7 (32.270667ms)

                                                
                                                
-- stdout --
	multinode-730000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-730000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-730000 status --alsologtostderr: exit status 7 (31.725541ms)

                                                
                                                
-- stdout --
	multinode-730000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:58:59.494789   21229 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:58:59.494937   21229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:58:59.494940   21229 out.go:304] Setting ErrFile to fd 2...
	I0318 04:58:59.494943   21229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:58:59.495070   21229 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:58:59.495185   21229 out.go:298] Setting JSON to false
	I0318 04:58:59.495199   21229 mustload.go:65] Loading cluster: multinode-730000
	I0318 04:58:59.495253   21229 notify.go:220] Checking for updates...
	I0318 04:58:59.495422   21229 config.go:182] Loaded profile config "multinode-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:58:59.495429   21229 status.go:255] checking status of multinode-730000 ...
	I0318 04:58:59.495630   21229 status.go:330] multinode-730000 host status = "Stopped" (err=<nil>)
	I0318 04:58:59.495633   21229 status.go:343] host is not running, skipping remaining checks
	I0318 04:58:59.495636   21229 status.go:257] multinode-730000 status: &{Name:multinode-730000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-730000 status --alsologtostderr": multinode-730000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000: exit status 7 (32.189208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)
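
Exit status 85 (GUEST_NODE_RETRIEVE) is the same cascade: m03 was never created because AddNode failed, so any per-node operation must fail. Checking the node list before touching a named node makes the dependency explicit; `node list -p multinode-730000` appears verbatim later in this log (the helper itself is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hasNode reports whether `minikube node list` mentions the node name.
	func hasNode(profile, node string) bool {
		out, err := exec.Command("out/minikube-darwin-arm64",
			"node", "list", "-p", profile).Output()
		return err == nil && strings.Contains(string(out), node)
	}

	func main() {
		if !hasNode("multinode-730000", "m03") {
			fmt.Println("m03 does not exist; stop/start on it can only fail")
			return
		}
		fmt.Println(exec.Command("out/minikube-darwin-arm64",
			"-p", "multinode-730000", "node", "stop", "m03").Run())
	}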

                                                
                                    
TestMultiNode/serial/StartAfterStop (46.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-730000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-730000 node start m03 -v=7 --alsologtostderr: exit status 85 (51.364917ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:58:59.559605   21233 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:58:59.559986   21233 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:58:59.559990   21233 out.go:304] Setting ErrFile to fd 2...
	I0318 04:58:59.559992   21233 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:58:59.560140   21233 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:58:59.560391   21233 mustload.go:65] Loading cluster: multinode-730000
	I0318 04:58:59.560582   21233 config.go:182] Loaded profile config "multinode-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:58:59.565432   21233 out.go:177] 
	W0318 04:58:59.569471   21233 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0318 04:58:59.569477   21233 out.go:239] * 
	* 
	W0318 04:58:59.571674   21233 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:58:59.575384   21233 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0318 04:58:59.559605   21233 out.go:291] Setting OutFile to fd 1 ...
I0318 04:58:59.559986   21233 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:58:59.559990   21233 out.go:304] Setting ErrFile to fd 2...
I0318 04:58:59.559992   21233 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:58:59.560140   21233 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
I0318 04:58:59.560391   21233 mustload.go:65] Loading cluster: multinode-730000
I0318 04:58:59.560582   21233 config.go:182] Loaded profile config "multinode-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:58:59.565432   21233 out.go:177] 
W0318 04:58:59.569471   21233 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0318 04:58:59.569477   21233 out.go:239] * 
* 
W0318 04:58:59.571674   21233 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0318 04:58:59.575384   21233 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-730000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-730000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-730000 status -v=7 --alsologtostderr: exit status 7 (32.223583ms)

                                                
                                                
-- stdout --
	multinode-730000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:58:59.611045   21235 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:58:59.611171   21235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:58:59.611174   21235 out.go:304] Setting ErrFile to fd 2...
	I0318 04:58:59.611177   21235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:58:59.611302   21235 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:58:59.611428   21235 out.go:298] Setting JSON to false
	I0318 04:58:59.611440   21235 mustload.go:65] Loading cluster: multinode-730000
	I0318 04:58:59.611488   21235 notify.go:220] Checking for updates...
	I0318 04:58:59.611653   21235 config.go:182] Loaded profile config "multinode-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:58:59.611660   21235 status.go:255] checking status of multinode-730000 ...
	I0318 04:58:59.611863   21235 status.go:330] multinode-730000 host status = "Stopped" (err=<nil>)
	I0318 04:58:59.611867   21235 status.go:343] host is not running, skipping remaining checks
	I0318 04:58:59.611869   21235 status.go:257] multinode-730000 status: &{Name:multinode-730000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-730000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-730000 status -v=7 --alsologtostderr: exit status 7 (77.881459ms)

                                                
                                                
-- stdout --
	multinode-730000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:59:00.511426   21237 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:59:00.511601   21237 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:00.511605   21237 out.go:304] Setting ErrFile to fd 2...
	I0318 04:59:00.511608   21237 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:00.511771   21237 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:59:00.511907   21237 out.go:298] Setting JSON to false
	I0318 04:59:00.511922   21237 mustload.go:65] Loading cluster: multinode-730000
	I0318 04:59:00.511946   21237 notify.go:220] Checking for updates...
	I0318 04:59:00.512181   21237 config.go:182] Loaded profile config "multinode-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:59:00.512188   21237 status.go:255] checking status of multinode-730000 ...
	I0318 04:59:00.512432   21237 status.go:330] multinode-730000 host status = "Stopped" (err=<nil>)
	I0318 04:59:00.512437   21237 status.go:343] host is not running, skipping remaining checks
	I0318 04:59:00.512440   21237 status.go:257] multinode-730000 status: &{Name:multinode-730000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-730000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-730000 status -v=7 --alsologtostderr: exit status 7 (76.981792ms)

                                                
                                                
-- stdout --
	multinode-730000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:59:01.501513   21239 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:59:01.501690   21239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:01.501694   21239 out.go:304] Setting ErrFile to fd 2...
	I0318 04:59:01.501698   21239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:01.501873   21239 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:59:01.502048   21239 out.go:298] Setting JSON to false
	I0318 04:59:01.502064   21239 mustload.go:65] Loading cluster: multinode-730000
	I0318 04:59:01.502102   21239 notify.go:220] Checking for updates...
	I0318 04:59:01.502324   21239 config.go:182] Loaded profile config "multinode-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:59:01.502335   21239 status.go:255] checking status of multinode-730000 ...
	I0318 04:59:01.502588   21239 status.go:330] multinode-730000 host status = "Stopped" (err=<nil>)
	I0318 04:59:01.502592   21239 status.go:343] host is not running, skipping remaining checks
	I0318 04:59:01.502595   21239 status.go:257] multinode-730000 status: &{Name:multinode-730000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-730000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-730000 status -v=7 --alsologtostderr: exit status 7 (75.861ms)

                                                
                                                
-- stdout --
	multinode-730000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:59:04.861375   21241 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:59:04.861564   21241 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:04.861569   21241 out.go:304] Setting ErrFile to fd 2...
	I0318 04:59:04.861572   21241 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:04.861747   21241 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:59:04.861900   21241 out.go:298] Setting JSON to false
	I0318 04:59:04.861915   21241 mustload.go:65] Loading cluster: multinode-730000
	I0318 04:59:04.861989   21241 notify.go:220] Checking for updates...
	I0318 04:59:04.862166   21241 config.go:182] Loaded profile config "multinode-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:59:04.862172   21241 status.go:255] checking status of multinode-730000 ...
	I0318 04:59:04.862436   21241 status.go:330] multinode-730000 host status = "Stopped" (err=<nil>)
	I0318 04:59:04.862440   21241 status.go:343] host is not running, skipping remaining checks
	I0318 04:59:04.862443   21241 status.go:257] multinode-730000 status: &{Name:multinode-730000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-730000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-730000 status -v=7 --alsologtostderr: exit status 7 (76.16375ms)

                                                
                                                
-- stdout --
	multinode-730000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:59:07.197171   21243 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:59:07.197335   21243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:07.197339   21243 out.go:304] Setting ErrFile to fd 2...
	I0318 04:59:07.197343   21243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:07.197516   21243 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:59:07.197667   21243 out.go:298] Setting JSON to false
	I0318 04:59:07.197682   21243 mustload.go:65] Loading cluster: multinode-730000
	I0318 04:59:07.197726   21243 notify.go:220] Checking for updates...
	I0318 04:59:07.197935   21243 config.go:182] Loaded profile config "multinode-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:59:07.197944   21243 status.go:255] checking status of multinode-730000 ...
	I0318 04:59:07.198211   21243 status.go:330] multinode-730000 host status = "Stopped" (err=<nil>)
	I0318 04:59:07.198216   21243 status.go:343] host is not running, skipping remaining checks
	I0318 04:59:07.198219   21243 status.go:257] multinode-730000 status: &{Name:multinode-730000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-730000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-730000 status -v=7 --alsologtostderr: exit status 7 (75.286833ms)

                                                
                                                
-- stdout --
	multinode-730000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:59:13.964992   21248 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:59:13.965151   21248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:13.965156   21248 out.go:304] Setting ErrFile to fd 2...
	I0318 04:59:13.965159   21248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:13.965308   21248 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:59:13.965484   21248 out.go:298] Setting JSON to false
	I0318 04:59:13.965503   21248 mustload.go:65] Loading cluster: multinode-730000
	I0318 04:59:13.965538   21248 notify.go:220] Checking for updates...
	I0318 04:59:13.965801   21248 config.go:182] Loaded profile config "multinode-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:59:13.965812   21248 status.go:255] checking status of multinode-730000 ...
	I0318 04:59:13.966079   21248 status.go:330] multinode-730000 host status = "Stopped" (err=<nil>)
	I0318 04:59:13.966084   21248 status.go:343] host is not running, skipping remaining checks
	I0318 04:59:13.966086   21248 status.go:257] multinode-730000 status: &{Name:multinode-730000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-730000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-730000 status -v=7 --alsologtostderr: exit status 7 (77.146542ms)

                                                
                                                
-- stdout --
	multinode-730000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:59:22.815113   21250 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:59:22.815301   21250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:22.815305   21250 out.go:304] Setting ErrFile to fd 2...
	I0318 04:59:22.815309   21250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:22.815465   21250 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:59:22.815635   21250 out.go:298] Setting JSON to false
	I0318 04:59:22.815650   21250 mustload.go:65] Loading cluster: multinode-730000
	I0318 04:59:22.815684   21250 notify.go:220] Checking for updates...
	I0318 04:59:22.815926   21250 config.go:182] Loaded profile config "multinode-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:59:22.815934   21250 status.go:255] checking status of multinode-730000 ...
	I0318 04:59:22.816201   21250 status.go:330] multinode-730000 host status = "Stopped" (err=<nil>)
	I0318 04:59:22.816206   21250 status.go:343] host is not running, skipping remaining checks
	I0318 04:59:22.816209   21250 status.go:257] multinode-730000 status: &{Name:multinode-730000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-730000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-730000 status -v=7 --alsologtostderr: exit status 7 (77.747208ms)

                                                
                                                
-- stdout --
	multinode-730000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:59:34.189701   21252 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:59:34.189887   21252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:34.189891   21252 out.go:304] Setting ErrFile to fd 2...
	I0318 04:59:34.189894   21252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:34.190060   21252 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:59:34.190226   21252 out.go:298] Setting JSON to false
	I0318 04:59:34.190241   21252 mustload.go:65] Loading cluster: multinode-730000
	I0318 04:59:34.190277   21252 notify.go:220] Checking for updates...
	I0318 04:59:34.190501   21252 config.go:182] Loaded profile config "multinode-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:59:34.190510   21252 status.go:255] checking status of multinode-730000 ...
	I0318 04:59:34.190785   21252 status.go:330] multinode-730000 host status = "Stopped" (err=<nil>)
	I0318 04:59:34.190790   21252 status.go:343] host is not running, skipping remaining checks
	I0318 04:59:34.190793   21252 status.go:257] multinode-730000 status: &{Name:multinode-730000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-730000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-730000 status -v=7 --alsologtostderr: exit status 7 (77.19625ms)

                                                
                                                
-- stdout --
	multinode-730000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:59:45.897183   21257 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:59:45.897656   21257 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:45.897663   21257 out.go:304] Setting ErrFile to fd 2...
	I0318 04:59:45.897666   21257 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:45.897902   21257 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:59:45.898085   21257 out.go:298] Setting JSON to false
	I0318 04:59:45.898099   21257 mustload.go:65] Loading cluster: multinode-730000
	I0318 04:59:45.898267   21257 notify.go:220] Checking for updates...
	I0318 04:59:45.898663   21257 config.go:182] Loaded profile config "multinode-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:59:45.898675   21257 status.go:255] checking status of multinode-730000 ...
	I0318 04:59:45.898931   21257 status.go:330] multinode-730000 host status = "Stopped" (err=<nil>)
	I0318 04:59:45.898936   21257 status.go:343] host is not running, skipping remaining checks
	I0318 04:59:45.898939   21257 status.go:257] multinode-730000 status: &{Name:multinode-730000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-730000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000: exit status 7 (34.643833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (46.41s)
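
The `node start` itself fails in about 51ms, yet the test is billed 46 seconds: the timestamps above (04:58:59, 04:59:00, 04:59:01, 04:59:04, 04:59:07, 04:59:13, 04:59:22, 04:59:34, 04:59:45) show the harness re-running `status` at growing intervals until it gives up. A sketch of that kind of poll loop (the exact schedule is internal to the harness; this only illustrates the pattern):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForHost polls `status --format={{.Host}}` until the wanted state
	// appears or the deadline passes, stretching the sleep between tries.
	func waitForHost(profile, want string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		interval := time.Second
		for time.Now().Before(deadline) {
			out, _ := exec.Command("out/minikube-darwin-arm64", "status",
				"--format={{.Host}}", "-p", profile).Output()
			if strings.TrimSpace(string(out)) == want {
				return nil
			}
			time.Sleep(interval)
			if interval < 12*time.Second {
				interval = interval * 3 / 2 // grow ~1.5x per attempt
			}
		}
		return fmt.Errorf("%s: host not %q after %v", profile, want, timeout)
	}

	func main() {
		fmt.Println(waitForHost("multinode-730000", "Running", 45*time.Second))
	}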

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (7.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-730000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-730000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-730000: (1.983001875s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-730000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-730000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.224216833s)

                                                
                                                
-- stdout --
	* [multinode-730000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-730000" primary control-plane node in "multinode-730000" cluster
	* Restarting existing qemu2 VM for "multinode-730000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-730000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
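
Both restart attempts above die on `Failed to connect to "/var/run/socket_vmnet": Connection refused`: with `Network:socket_vmnet` the qemu2 driver gets its guest networking from the socket_vmnet daemon, so if nothing is listening on that socket the VM cannot be brought up, which is the likely root cause behind the stopped host seen throughout this section. A quick probe sketch:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same socket path the qemu command line below is pointed at.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
		if err != nil {
			fmt.Println("socket_vmnet is not accepting connections:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is up; look elsewhere for the restart failure")
	}
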
** stderr ** 
	I0318 04:59:48.017254   21273 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:59:48.017407   21273 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:48.017411   21273 out.go:304] Setting ErrFile to fd 2...
	I0318 04:59:48.017414   21273 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:48.017554   21273 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:59:48.018740   21273 out.go:298] Setting JSON to false
	I0318 04:59:48.037288   21273 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10761,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:59:48.037348   21273 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:59:48.042679   21273 out.go:177] * [multinode-730000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:59:48.049794   21273 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 04:59:48.052745   21273 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 04:59:48.049828   21273 notify.go:220] Checking for updates...
	I0318 04:59:48.059709   21273 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:59:48.062635   21273 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:59:48.065699   21273 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 04:59:48.068691   21273 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:59:48.072074   21273 config.go:182] Loaded profile config "multinode-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:59:48.072130   21273 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:59:48.076718   21273 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:59:48.083600   21273 start.go:297] selected driver: qemu2
	I0318 04:59:48.083606   21273 start.go:901] validating driver "qemu2" against &{Name:multinode-730000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-730000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:59:48.083671   21273 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:59:48.085911   21273 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:59:48.085963   21273 cni.go:84] Creating CNI manager for ""
	I0318 04:59:48.085969   21273 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 04:59:48.086025   21273 start.go:340] cluster config:
	{Name:multinode-730000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-730000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:59:48.090302   21273 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:59:48.097700   21273 out.go:177] * Starting "multinode-730000" primary control-plane node in "multinode-730000" cluster
	I0318 04:59:48.101678   21273 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:59:48.101693   21273 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:59:48.101700   21273 cache.go:56] Caching tarball of preloaded images
	I0318 04:59:48.101763   21273 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:59:48.101769   21273 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:59:48.101835   21273 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/multinode-730000/config.json ...
	I0318 04:59:48.102314   21273 start.go:360] acquireMachinesLock for multinode-730000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:59:48.102346   21273 start.go:364] duration metric: took 25.958µs to acquireMachinesLock for "multinode-730000"
	I0318 04:59:48.102356   21273 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:59:48.102361   21273 fix.go:54] fixHost starting: 
	I0318 04:59:48.102475   21273 fix.go:112] recreateIfNeeded on multinode-730000: state=Stopped err=<nil>
	W0318 04:59:48.102484   21273 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:59:48.106704   21273 out.go:177] * Restarting existing qemu2 VM for "multinode-730000" ...
	I0318 04:59:48.114667   21273 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:2f:59:63:d3:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/disk.qcow2
	I0318 04:59:48.116702   21273 main.go:141] libmachine: STDOUT: 
	I0318 04:59:48.116727   21273 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:59:48.116756   21273 fix.go:56] duration metric: took 14.394583ms for fixHost
	I0318 04:59:48.116762   21273 start.go:83] releasing machines lock for "multinode-730000", held for 14.411792ms
	W0318 04:59:48.116770   21273 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:59:48.116803   21273 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:59:48.116808   21273 start.go:728] Will try again in 5 seconds ...
	I0318 04:59:53.118800   21273 start.go:360] acquireMachinesLock for multinode-730000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:59:53.119156   21273 start.go:364] duration metric: took 270.708µs to acquireMachinesLock for "multinode-730000"
	I0318 04:59:53.119285   21273 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:59:53.119299   21273 fix.go:54] fixHost starting: 
	I0318 04:59:53.119930   21273 fix.go:112] recreateIfNeeded on multinode-730000: state=Stopped err=<nil>
	W0318 04:59:53.119954   21273 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:59:53.125313   21273 out.go:177] * Restarting existing qemu2 VM for "multinode-730000" ...
	I0318 04:59:53.132347   21273 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:2f:59:63:d3:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/disk.qcow2
	I0318 04:59:53.140158   21273 main.go:141] libmachine: STDOUT: 
	I0318 04:59:53.140220   21273 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:59:53.140290   21273 fix.go:56] duration metric: took 20.989709ms for fixHost
	I0318 04:59:53.140311   21273 start.go:83] releasing machines lock for "multinode-730000", held for 21.134834ms
	W0318 04:59:53.140546   21273 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-730000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-730000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:59:53.148329   21273 out.go:177] 
	W0318 04:59:53.152278   21273 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:59:53.152319   21273 out.go:239] * 
	* 
	W0318 04:59:53.154186   21273 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:59:53.160244   21273 out.go:177] 

                                                
                                                
** /stderr **
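Note on the failure mode above: minikube does not launch qemu directly. The libmachine line shows it running socket_vmnet_client, which connects to the unix socket served by the socket_vmnet daemon and then execs qemu, handing the connected socket to the child so that qemu can consume it via "-netdev socket,id=net0,fd=3". Abridged shape of that command (elided with "..." from the full line above):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 ... \
	  -device virtio-net-pci,netdev=net0,mac=3e:2f:59:63:d3:dc \
	  -netdev socket,id=net0,fd=3 ...

The "Connection refused" therefore comes from the connect() to /var/run/socket_vmnet, before qemu ever runs, which is why STDOUT is empty and each attempt fails within ~14-21ms (the fixHost duration metrics above).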
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-730000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-730000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000: exit status 7 (33.83025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.34s)
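Every failed start in this run shares that root cause: nothing is listening on /var/run/socket_vmnet. A minimal way to confirm the daemon is down on the build host (a sketch; the paths come from the log above, and /usr/bin/true is just a stand-in for qemu):

	# Is the socket present, and is the daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# Drive the client directly; on this host it should reproduce
	# 'Failed to connect to "/var/run/socket_vmnet": Connection refused'
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true

If the last command fails the same way, the problem is independent of minikube and qemu.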

                                                
                                    
TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-730000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-730000 node delete m03: exit status 83 (44.568083ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-730000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-730000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-730000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-730000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-730000 status --alsologtostderr: exit status 7 (31.822292ms)

                                                
                                                
-- stdout --
	multinode-730000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:59:53.350185   21287 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:59:53.350370   21287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:53.350373   21287 out.go:304] Setting ErrFile to fd 2...
	I0318 04:59:53.350375   21287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:53.350489   21287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:59:53.350597   21287 out.go:298] Setting JSON to false
	I0318 04:59:53.350612   21287 mustload.go:65] Loading cluster: multinode-730000
	I0318 04:59:53.350659   21287 notify.go:220] Checking for updates...
	I0318 04:59:53.350827   21287 config.go:182] Loaded profile config "multinode-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:59:53.350834   21287 status.go:255] checking status of multinode-730000 ...
	I0318 04:59:53.351029   21287 status.go:330] multinode-730000 host status = "Stopped" (err=<nil>)
	I0318 04:59:53.351033   21287 status.go:343] host is not running, skipping remaining checks
	I0318 04:59:53.351036   21287 status.go:257] multinode-730000 status: &{Name:multinode-730000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-730000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000: exit status 7 (32.15025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
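The post-mortem helpers query one field of minikube's status struct with a Go template (--format={{.Host}}). The available field names are visible in the status.go:257 dump earlier in this block, so broader templates also work; for example (a sketch using fields from that dump):

	out/minikube-darwin-arm64 status -p multinode-730000 \
	  --format='{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'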

                                                
                                    
TestMultiNode/serial/StopMultiNode (4.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-730000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-730000 stop: (3.972211208s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-730000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-730000 status: exit status 7 (70.622166ms)

                                                
                                                
-- stdout --
	multinode-730000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-730000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-730000 status --alsologtostderr: exit status 7 (34.025958ms)

                                                
                                                
-- stdout --
	multinode-730000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:59:57.459782   21317 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:59:57.459925   21317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:57.459929   21317 out.go:304] Setting ErrFile to fd 2...
	I0318 04:59:57.459931   21317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:57.460072   21317 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:59:57.460192   21317 out.go:298] Setting JSON to false
	I0318 04:59:57.460204   21317 mustload.go:65] Loading cluster: multinode-730000
	I0318 04:59:57.460263   21317 notify.go:220] Checking for updates...
	I0318 04:59:57.460429   21317 config.go:182] Loaded profile config "multinode-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:59:57.460435   21317 status.go:255] checking status of multinode-730000 ...
	I0318 04:59:57.460621   21317 status.go:330] multinode-730000 host status = "Stopped" (err=<nil>)
	I0318 04:59:57.460625   21317 status.go:343] host is not running, skipping remaining checks
	I0318 04:59:57.460628   21317 status.go:257] multinode-730000 status: &{Name:multinode-730000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-730000 status --alsologtostderr": multinode-730000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-730000 status --alsologtostderr": multinode-730000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000: exit status 7 (32.180541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (4.11s)
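The "incorrect number of stopped hosts/kubelets" messages are a downstream effect, not a problem with the stop itself: the worker nodes were never created, so status lists a single stopped control plane where the test expects one entry per node. Roughly what the assertions at multinode_test.go:364 and :368 count (a sketch, not the test's exact code):

	# Expected one match per node; with only the control-plane profile
	# present this prints 1 instead of the expected count
	out/minikube-darwin-arm64 -p multinode-730000 status | grep -c 'host: Stopped'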

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-730000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-730000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.192021417s)

                                                
                                                
-- stdout --
	* [multinode-730000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-730000" primary control-plane node in "multinode-730000" cluster
	* Restarting existing qemu2 VM for "multinode-730000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-730000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:59:57.523946   21321 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:59:57.524069   21321 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:57.524072   21321 out.go:304] Setting ErrFile to fd 2...
	I0318 04:59:57.524075   21321 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:59:57.524228   21321 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:59:57.525213   21321 out.go:298] Setting JSON to false
	I0318 04:59:57.541581   21321 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10770,"bootTime":1710752427,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:59:57.541651   21321 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:59:57.546467   21321 out.go:177] * [multinode-730000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:59:57.554460   21321 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 04:59:57.558182   21321 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 04:59:57.554523   21321 notify.go:220] Checking for updates...
	I0318 04:59:57.564378   21321 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:59:57.567458   21321 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:59:57.570354   21321 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 04:59:57.573395   21321 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:59:57.576719   21321 config.go:182] Loaded profile config "multinode-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:59:57.576966   21321 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:59:57.581359   21321 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:59:57.588369   21321 start.go:297] selected driver: qemu2
	I0318 04:59:57.588375   21321 start.go:901] validating driver "qemu2" against &{Name:multinode-730000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-730000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:59:57.588448   21321 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:59:57.590583   21321 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:59:57.590635   21321 cni.go:84] Creating CNI manager for ""
	I0318 04:59:57.590640   21321 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 04:59:57.590684   21321 start.go:340] cluster config:
	{Name:multinode-730000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-730000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:59:57.595028   21321 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:59:57.603368   21321 out.go:177] * Starting "multinode-730000" primary control-plane node in "multinode-730000" cluster
	I0318 04:59:57.607241   21321 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:59:57.607253   21321 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:59:57.607261   21321 cache.go:56] Caching tarball of preloaded images
	I0318 04:59:57.607300   21321 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:59:57.607306   21321 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:59:57.607357   21321 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/multinode-730000/config.json ...
	I0318 04:59:57.607829   21321 start.go:360] acquireMachinesLock for multinode-730000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:59:57.607855   21321 start.go:364] duration metric: took 19.5µs to acquireMachinesLock for "multinode-730000"
	I0318 04:59:57.607864   21321 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:59:57.607868   21321 fix.go:54] fixHost starting: 
	I0318 04:59:57.607985   21321 fix.go:112] recreateIfNeeded on multinode-730000: state=Stopped err=<nil>
	W0318 04:59:57.607993   21321 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:59:57.612383   21321 out.go:177] * Restarting existing qemu2 VM for "multinode-730000" ...
	I0318 04:59:57.620397   21321 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:2f:59:63:d3:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/disk.qcow2
	I0318 04:59:57.622376   21321 main.go:141] libmachine: STDOUT: 
	I0318 04:59:57.622398   21321 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:59:57.622427   21321 fix.go:56] duration metric: took 14.558292ms for fixHost
	I0318 04:59:57.622430   21321 start.go:83] releasing machines lock for "multinode-730000", held for 14.572459ms
	W0318 04:59:57.622438   21321 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:59:57.622472   21321 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:59:57.622477   21321 start.go:728] Will try again in 5 seconds ...
	I0318 05:00:02.623381   21321 start.go:360] acquireMachinesLock for multinode-730000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:00:02.623700   21321 start.go:364] duration metric: took 238.125µs to acquireMachinesLock for "multinode-730000"
	I0318 05:00:02.623808   21321 start.go:96] Skipping create...Using existing machine configuration
	I0318 05:00:02.623832   21321 fix.go:54] fixHost starting: 
	I0318 05:00:02.624579   21321 fix.go:112] recreateIfNeeded on multinode-730000: state=Stopped err=<nil>
	W0318 05:00:02.624605   21321 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 05:00:02.635010   21321 out.go:177] * Restarting existing qemu2 VM for "multinode-730000" ...
	I0318 05:00:02.638121   21321 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:2f:59:63:d3:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/multinode-730000/disk.qcow2
	I0318 05:00:02.648259   21321 main.go:141] libmachine: STDOUT: 
	I0318 05:00:02.648338   21321 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:00:02.648432   21321 fix.go:56] duration metric: took 24.600958ms for fixHost
	I0318 05:00:02.648452   21321 start.go:83] releasing machines lock for "multinode-730000", held for 24.729667ms
	W0318 05:00:02.648687   21321 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-730000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-730000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:00:02.656063   21321 out.go:177] 
	W0318 05:00:02.660021   21321 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:00:02.660054   21321 out.go:239] * 
	* 
	W0318 05:00:02.662566   21321 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:00:02.672009   21321 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-730000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000: exit status 7 (69.69275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
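Note the retry shape in the log: fixHost fails within tens of milliseconds, start.go:728 waits 5 seconds, retries once, and then exits with status 80 (GUEST_PROVISION). Retrying inside minikube cannot help while the daemon is down; it has to be restarted on the host. A hedged sketch of that, based on the lima-vm/socket_vmnet README rather than anything in this log (the gateway address in particular is an assumption):

	# Start the daemon as root on the socket path minikube expects
	sudo /opt/socket_vmnet/bin/socket_vmnet \
	  --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet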

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-730000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-730000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-730000-m01 --driver=qemu2 : exit status 80 (10.1225105s)

                                                
                                                
-- stdout --
	* [multinode-730000-m01] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-730000-m01" primary control-plane node in "multinode-730000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-730000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-730000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-730000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-730000-m02 --driver=qemu2 : exit status 80 (9.999627166s)

                                                
                                                
-- stdout --
	* [multinode-730000-m02] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-730000-m02" primary control-plane node in "multinode-730000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-730000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-730000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-730000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-730000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-730000: exit status 83 (82.991791ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-730000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-730000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-730000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-730000 -n multinode-730000: exit status 7 (33.114709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.38s)
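One side effect worth noting: multinode_test.go:484 above deletes only the -m02 profile, so the multinode-730000-m01 profile created by this test leaks into the next one; TestPreload's log below still loads it (the "Loaded profile config" line at config.go:182). Cleanup is the usual profile delete:

	out/minikube-darwin-arm64 delete -p multinode-730000-m01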

                                                
                                    
TestPreload (9.98s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-075000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-075000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.809028166s)

                                                
                                                
-- stdout --
	* [test-preload-075000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-075000" primary control-plane node in "test-preload-075000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-075000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 05:00:23.318235   21387 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:00:23.318351   21387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:00:23.318354   21387 out.go:304] Setting ErrFile to fd 2...
	I0318 05:00:23.318356   21387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:00:23.318489   21387 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:00:23.319547   21387 out.go:298] Setting JSON to false
	I0318 05:00:23.335565   21387 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10796,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:00:23.335631   21387 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:00:23.340860   21387 out.go:177] * [test-preload-075000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:00:23.354811   21387 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:00:23.349909   21387 notify.go:220] Checking for updates...
	I0318 05:00:23.361855   21387 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:00:23.365798   21387 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:00:23.369861   21387 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:00:23.371291   21387 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:00:23.374845   21387 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:00:23.378304   21387 config.go:182] Loaded profile config "multinode-730000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:00:23.378358   21387 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:00:23.382682   21387 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 05:00:23.389842   21387 start.go:297] selected driver: qemu2
	I0318 05:00:23.389849   21387 start.go:901] validating driver "qemu2" against <nil>
	I0318 05:00:23.389859   21387 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:00:23.392263   21387 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 05:00:23.396735   21387 out.go:177] * Automatically selected the socket_vmnet network
	I0318 05:00:23.400024   21387 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 05:00:23.400065   21387 cni.go:84] Creating CNI manager for ""
	I0318 05:00:23.400075   21387 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:00:23.400079   21387 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 05:00:23.400131   21387 start.go:340] cluster config:
	{Name:test-preload-075000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-075000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:00:23.405111   21387 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:00:23.412864   21387 out.go:177] * Starting "test-preload-075000" primary control-plane node in "test-preload-075000" cluster
	I0318 05:00:23.416836   21387 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0318 05:00:23.416912   21387 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/test-preload-075000/config.json ...
	I0318 05:00:23.416931   21387 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/test-preload-075000/config.json: {Name:mk2cb17f12ebde5483d4f37503e9f90f33d3d2c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:00:23.416954   21387 cache.go:107] acquiring lock: {Name:mk39bd09ca568613e74095f6d80a9acef2e49dbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:00:23.416959   21387 cache.go:107] acquiring lock: {Name:mk6b773e4c73780eb6f546b283d9b449f5a6c8f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:00:23.416996   21387 cache.go:107] acquiring lock: {Name:mk6052cb71b26c9656a414aa69be14286bd60847 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:00:23.417200   21387 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0318 05:00:23.417203   21387 start.go:360] acquireMachinesLock for test-preload-075000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:00:23.417213   21387 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0318 05:00:23.417215   21387 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:00:23.417200   21387 cache.go:107] acquiring lock: {Name:mk65182a0bcfe5daa7d6ff106a72cb45db4d0b0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:00:23.417229   21387 cache.go:107] acquiring lock: {Name:mke99976f3bda145d515cd90903cb513be5a2159 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:00:23.417198   21387 cache.go:107] acquiring lock: {Name:mk9228fc838bfd111aca199d11ded93ca6cc0eed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:00:23.417238   21387 start.go:364] duration metric: took 26.166µs to acquireMachinesLock for "test-preload-075000"
	I0318 05:00:23.417248   21387 cache.go:107] acquiring lock: {Name:mk8864406a328a9c6c15656264942f7d29a0677c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:00:23.417298   21387 start.go:93] Provisioning new machine with config: &{Name:test-preload-075000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-075000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:00:23.417354   21387 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:00:23.421834   21387 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 05:00:23.417236   21387 cache.go:107] acquiring lock: {Name:mkc0c1ebb17f410b38c030ff50148a06c42b845a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:00:23.417414   21387 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0318 05:00:23.417493   21387 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0318 05:00:23.417530   21387 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0318 05:00:23.417935   21387 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0318 05:00:23.422647   21387 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:00:23.425322   21387 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0318 05:00:23.427459   21387 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0318 05:00:23.428202   21387 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0318 05:00:23.428268   21387 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:00:23.428297   21387 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0318 05:00:23.429788   21387 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0318 05:00:23.429828   21387 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:00:23.429843   21387 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0318 05:00:23.439463   21387 start.go:159] libmachine.API.Create for "test-preload-075000" (driver="qemu2")
	I0318 05:00:23.439486   21387 client.go:168] LocalClient.Create starting
	I0318 05:00:23.439544   21387 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:00:23.439571   21387 main.go:141] libmachine: Decoding PEM data...
	I0318 05:00:23.439580   21387 main.go:141] libmachine: Parsing certificate...
	I0318 05:00:23.439623   21387 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:00:23.439645   21387 main.go:141] libmachine: Decoding PEM data...
	I0318 05:00:23.439650   21387 main.go:141] libmachine: Parsing certificate...
	I0318 05:00:23.439984   21387 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:00:23.581792   21387 main.go:141] libmachine: Creating SSH key...
	I0318 05:00:23.682590   21387 main.go:141] libmachine: Creating Disk image...
	I0318 05:00:23.682610   21387 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:00:23.682766   21387 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/test-preload-075000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/test-preload-075000/disk.qcow2
	I0318 05:00:23.696051   21387 main.go:141] libmachine: STDOUT: 
	I0318 05:00:23.696075   21387 main.go:141] libmachine: STDERR: 
	I0318 05:00:23.696150   21387 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/test-preload-075000/disk.qcow2 +20000M
	I0318 05:00:23.708505   21387 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:00:23.708529   21387 main.go:141] libmachine: STDERR: 
	I0318 05:00:23.708541   21387 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/test-preload-075000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/test-preload-075000/disk.qcow2
	I0318 05:00:23.708545   21387 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:00:23.708580   21387 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/test-preload-075000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/test-preload-075000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/test-preload-075000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:c3:a6:4f:cb:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/test-preload-075000/disk.qcow2
	I0318 05:00:23.710636   21387 main.go:141] libmachine: STDOUT: 
	I0318 05:00:23.710657   21387 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:00:23.710675   21387 client.go:171] duration metric: took 271.193584ms to LocalClient.Create
	I0318 05:00:25.339221   21387 cache.go:162] opening:  /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0318 05:00:25.468046   21387 cache.go:157] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0318 05:00:25.468105   21387 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.0510195s
	I0318 05:00:25.468153   21387 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0318 05:00:25.485421   21387 cache.go:162] opening:  /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0318 05:00:25.495607   21387 cache.go:162] opening:  /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0318 05:00:25.497474   21387 cache.go:162] opening:  /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0318 05:00:25.503658   21387 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0318 05:00:25.503737   21387 cache.go:162] opening:  /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0318 05:00:25.522437   21387 cache.go:162] opening:  /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0318 05:00:25.563400   21387 cache.go:162] opening:  /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0318 05:00:25.710802   21387 start.go:128] duration metric: took 2.293492333s to createHost
	I0318 05:00:25.710851   21387 start.go:83] releasing machines lock for "test-preload-075000", held for 2.293637291s
	W0318 05:00:25.710918   21387 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:00:25.727976   21387 out.go:177] * Deleting "test-preload-075000" in qemu2 ...
	W0318 05:00:25.758914   21387 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:00:25.758954   21387 start.go:728] Will try again in 5 seconds ...
	W0318 05:00:25.945295   21387 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0318 05:00:25.945389   21387 cache.go:162] opening:  /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 05:00:26.140486   21387 cache.go:157] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0318 05:00:26.140531   21387 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.723436334s
	I0318 05:00:26.140555   21387 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0318 05:00:27.491660   21387 cache.go:157] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0318 05:00:27.491715   21387 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.07486725s
	I0318 05:00:27.491741   21387 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0318 05:00:27.628151   21387 cache.go:157] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0318 05:00:27.628223   21387 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.21140575s
	I0318 05:00:27.628252   21387 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0318 05:00:28.446156   21387 cache.go:157] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0318 05:00:28.446221   21387 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.029423833s
	I0318 05:00:28.446250   21387 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0318 05:00:29.736784   21387 cache.go:157] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0318 05:00:29.736837   21387 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 6.319782209s
	I0318 05:00:29.736864   21387 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0318 05:00:30.759237   21387 start.go:360] acquireMachinesLock for test-preload-075000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:00:30.759648   21387 start.go:364] duration metric: took 334µs to acquireMachinesLock for "test-preload-075000"
	I0318 05:00:30.759789   21387 start.go:93] Provisioning new machine with config: &{Name:test-preload-075000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.24.4 ClusterName:test-preload-075000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:00:30.760038   21387 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:00:30.769041   21387 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 05:00:30.819213   21387 start.go:159] libmachine.API.Create for "test-preload-075000" (driver="qemu2")
	I0318 05:00:30.819269   21387 client.go:168] LocalClient.Create starting
	I0318 05:00:30.819371   21387 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:00:30.819434   21387 main.go:141] libmachine: Decoding PEM data...
	I0318 05:00:30.819448   21387 main.go:141] libmachine: Parsing certificate...
	I0318 05:00:30.819514   21387 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:00:30.819593   21387 main.go:141] libmachine: Decoding PEM data...
	I0318 05:00:30.819605   21387 main.go:141] libmachine: Parsing certificate...
	I0318 05:00:30.820127   21387 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:00:30.969174   21387 main.go:141] libmachine: Creating SSH key...
	I0318 05:00:31.016844   21387 main.go:141] libmachine: Creating Disk image...
	I0318 05:00:31.016851   21387 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:00:31.017034   21387 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/test-preload-075000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/test-preload-075000/disk.qcow2
	I0318 05:00:31.029676   21387 main.go:141] libmachine: STDOUT: 
	I0318 05:00:31.029709   21387 main.go:141] libmachine: STDERR: 
	I0318 05:00:31.029766   21387 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/test-preload-075000/disk.qcow2 +20000M
	I0318 05:00:31.040840   21387 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:00:31.040857   21387 main.go:141] libmachine: STDERR: 
	I0318 05:00:31.040868   21387 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/test-preload-075000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/test-preload-075000/disk.qcow2
	I0318 05:00:31.040872   21387 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:00:31.040915   21387 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/test-preload-075000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/test-preload-075000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/test-preload-075000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:af:12:ed:73:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/test-preload-075000/disk.qcow2
	I0318 05:00:31.042711   21387 main.go:141] libmachine: STDOUT: 
	I0318 05:00:31.042728   21387 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:00:31.042741   21387 client.go:171] duration metric: took 223.471875ms to LocalClient.Create
	I0318 05:00:31.890629   21387 cache.go:157] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0318 05:00:31.890693   21387 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 8.4738005s
	I0318 05:00:31.890720   21387 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0318 05:00:33.043626   21387 start.go:128] duration metric: took 2.283601583s to createHost
	I0318 05:00:33.043717   21387 start.go:83] releasing machines lock for "test-preload-075000", held for 2.284116083s
	W0318 05:00:33.043975   21387 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-075000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-075000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:00:33.058821   21387 out.go:177] 
	W0318 05:00:33.062734   21387 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:00:33.062769   21387 out.go:239] * 
	* 
	W0318 05:00:33.065345   21387 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:00:33.077753   21387 out.go:177] 

** /stderr **
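
A note on the cache lines interleaved above: although host creation fails, image caching still runs to completion. cache.go takes a per-image lock, image.go first asks the local Docker daemon for each image (the "daemon lookup ... No such image" lines), falls back to pulling from the registry, re-pulls when the resolved manifest has the wrong architecture ("arch mismatch: want arm64 got amd64. fixing"), and finally writes a tarball under .minikube/cache/images/arm64. The following is a simplified sketch of that flow using go-containerregistry, the library these log messages originate from; the cacheImage helper and the /tmp path are illustrative, not minikube's actual API:

	package main

	import (
		"fmt"

		"github.com/google/go-containerregistry/pkg/name"
		v1 "github.com/google/go-containerregistry/pkg/v1"
		"github.com/google/go-containerregistry/pkg/v1/daemon"
		"github.com/google/go-containerregistry/pkg/v1/remote"
		"github.com/google/go-containerregistry/pkg/v1/tarball"
	)

	// cacheImage mirrors the daemon-lookup / remote-pull / arch-check flow in
	// the log: try the local Docker daemon first, fall back to the registry,
	// and if the manifest resolves to the wrong architecture, re-pull for the
	// wanted platform before writing the tarball.
	func cacheImage(image, wantArch, tarPath string) error {
		ref, err := name.ParseReference(image)
		if err != nil {
			return err
		}
		img, err := daemon.Image(ref) // "daemon lookup"; fails with "No such image" when absent
		if err != nil {
			if img, err = remote.Image(ref); err != nil {
				return err
			}
		}
		cfg, err := img.ConfigFile()
		if err != nil {
			return err
		}
		if cfg.Architecture != wantArch { // "arch mismatch: want arm64 got amd64. fixing"
			img, err = remote.Image(ref, remote.WithPlatform(v1.Platform{OS: "linux", Architecture: wantArch}))
			if err != nil {
				return err
			}
		}
		return tarball.WriteToFile(tarPath, ref, img) // "save to tar file ... succeeded"
	}

	func main() {
		fmt.Println(cacheImage("registry.k8s.io/pause:3.7", "arm64",
			"/tmp/cache/images/arm64/registry.k8s.io/pause_3.7"))
	}
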
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-075000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-03-18 05:00:33.099755 -0700 PDT m=+752.096209626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-075000 -n test-preload-075000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-075000 -n test-preload-075000: exit status 7 (67.452041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-075000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-075000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-075000
--- FAIL: TestPreload (9.98s)
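
Every failure in this test reduces to the same host-side precondition: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a connected file descriptor and the VM never boots. Below is a minimal Go sketch of that connectivity check, assuming only the socket path shown in the log; probeSocketVMnet is a hypothetical helper, not minikube code:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// probeSocketVMnet dials the Unix socket that socket_vmnet_client passes
	// to QEMU as fd 3. "connection refused" means the socket file exists but
	// no daemon is accepting on it; a missing file surfaces as "no such file
	// or directory" instead.
	func probeSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
		}
		defer conn.Close()
		return nil
	}

	func main() {
		if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1) // the same class of failure the log reports as exit status 1
		}
		fmt.Println("socket_vmnet is accepting connections")
	}
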

TestScheduledStopUnix (10.14s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-710000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-710000 --memory=2048 --driver=qemu2 : exit status 80 (9.967239917s)

-- stdout --
	* [scheduled-stop-710000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-710000" primary control-plane node in "scheduled-stop-710000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-710000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-710000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-710000" primary control-plane node in "scheduled-stop-710000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-710000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-03-18 05:00:43.237548 -0700 PDT m=+762.234324918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-710000 -n scheduled-stop-710000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-710000 -n scheduled-stop-710000: exit status 7 (68.629917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-710000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-710000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-710000
--- FAIL: TestScheduledStopUnix (10.14s)
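
The VM-creation path this test shares with TestPreload is visible in the earlier log: libmachine shells out to qemu-img convert (raw boot image to qcow2) and then qemu-img resize +20000M before launching qemu-system-aarch64, and both steps succeed here; only the socket_vmnet connection fails. A sketch replaying those two invocations with os/exec; the createDiskImage helper and the /tmp paths are illustrative:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createDiskImage replays the two qemu-img invocations from the log:
	// convert the raw boot disk to qcow2, then grow it by the requested size.
	// The tests use .minikube/machines/<profile>/disk.qcow2.
	func createDiskImage(raw, qcow2 string, extraMB int) error {
		convert := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2)
		if out, err := convert.CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img convert: %v: %s", err, out)
		}
		resize := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB))
		if out, err := resize.CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img resize: %v: %s", err, out)
		}
		return nil // the log prints "Image resized." on success
	}

	func main() {
		fmt.Println(createDiskImage("/tmp/disk.qcow2.raw", "/tmp/disk.qcow2", 20000))
	}
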

TestSkaffold (16.81s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1128629961 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-521000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-521000 --memory=2600 --driver=qemu2 : exit status 80 (9.8638165s)

-- stdout --
	* [skaffold-521000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-521000" primary control-plane node in "skaffold-521000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-521000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-521000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-521000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-521000" primary control-plane node in "skaffold-521000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-521000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-521000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-03-18 05:01:00.053189 -0700 PDT m=+779.050500001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-521000 -n skaffold-521000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-521000 -n skaffold-521000: exit status 7 (61.582417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-521000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-521000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-521000
--- FAIL: TestSkaffold (16.81s)
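
The roughly 10-second wall time of each failed start (9.98s, 10.14s, and the first ~10s of this 16.81s run) is accounted for by minikube's retry flow visible in the logs: a first createHost attempt of about 2.3s, the "Will try again in 5 seconds" pause after deleting the half-created profile, and a second identical attempt before exiting with status 80. A sketch of that delete-and-retry shape; startWithRetry is a hypothetical stand-in for the logic in start.go, not its real signature:

	package main

	import (
		"fmt"
		"time"
	)

	// startWithRetry mirrors the flow in the log: create the host, and on
	// failure delete the half-created profile and try exactly once more
	// after five seconds.
	func startWithRetry(create func() error, deleteProfile func()) error {
		if err := create(); err != nil {
			deleteProfile() // "* Deleting <profile> in qemu2 ..."
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			return create()
		}
		return nil
	}

	func main() {
		attempt := func() error {
			return fmt.Errorf(`Failed to connect to "/var/run/socket_vmnet": connection refused`)
		}
		fmt.Println(startWithRetry(attempt, func() {}))
	}
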

TestRunningBinaryUpgrade (662.22s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3581450319 start -p running-upgrade-349000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3581450319 start -p running-upgrade-349000 --memory=2200 --vm-driver=qemu2 : (1m31.886120042s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-349000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-349000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m52.191975417s)

-- stdout --
	* [running-upgrade-349000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-349000" primary control-plane node in "running-upgrade-349000" cluster
	* Updating the running qemu2 "running-upgrade-349000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0318 05:02:57.409453   21725 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:02:57.409575   21725 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:02:57.409578   21725 out.go:304] Setting ErrFile to fd 2...
	I0318 05:02:57.409581   21725 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:02:57.409731   21725 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:02:57.410793   21725 out.go:298] Setting JSON to false
	I0318 05:02:57.427580   21725 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10950,"bootTime":1710752427,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:02:57.427649   21725 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:02:57.432707   21725 out.go:177] * [running-upgrade-349000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:02:57.439632   21725 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:02:57.443731   21725 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:02:57.439730   21725 notify.go:220] Checking for updates...
	I0318 05:02:57.451606   21725 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:02:57.454697   21725 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:02:57.457682   21725 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:02:57.460664   21725 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:02:57.463945   21725 config.go:182] Loaded profile config "running-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 05:02:57.467664   21725 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 05:02:57.470655   21725 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:02:57.474650   21725 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 05:02:57.481631   21725 start.go:297] selected driver: qemu2
	I0318 05:02:57.481636   21725 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54379 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgra
de-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 05:02:57.481679   21725 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:02:57.483829   21725 cni.go:84] Creating CNI manager for ""
	I0318 05:02:57.483845   21725 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:02:57.483862   21725 start.go:340] cluster config:
	{Name:running-upgrade-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54379 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 05:02:57.483906   21725 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:02:57.489587   21725 out.go:177] * Starting "running-upgrade-349000" primary control-plane node in "running-upgrade-349000" cluster
	I0318 05:02:57.493675   21725 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 05:02:57.493686   21725 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0318 05:02:57.493693   21725 cache.go:56] Caching tarball of preloaded images
	I0318 05:02:57.493737   21725 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:02:57.493742   21725 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0318 05:02:57.493785   21725 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/config.json ...
	I0318 05:02:57.494105   21725 start.go:360] acquireMachinesLock for running-upgrade-349000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:03:08.492169   21725 start.go:364] duration metric: took 10.998404708s to acquireMachinesLock for "running-upgrade-349000"
	I0318 05:03:08.492191   21725 start.go:96] Skipping create...Using existing machine configuration
	I0318 05:03:08.492196   21725 fix.go:54] fixHost starting: 
	I0318 05:03:08.493017   21725 fix.go:112] recreateIfNeeded on running-upgrade-349000: state=Running err=<nil>
	W0318 05:03:08.493025   21725 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 05:03:08.501231   21725 out.go:177] * Updating the running qemu2 "running-upgrade-349000" VM ...
	I0318 05:03:08.505092   21725 machine.go:94] provisionDockerMachine start ...
	I0318 05:03:08.505158   21725 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:08.505297   21725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10449dbf0] 0x1044a0450 <nil>  [] 0s} localhost 54315 <nil> <nil>}
	I0318 05:03:08.505307   21725 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 05:03:08.578868   21725 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-349000
	
	I0318 05:03:08.578884   21725 buildroot.go:166] provisioning hostname "running-upgrade-349000"
	I0318 05:03:08.578923   21725 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:08.579035   21725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10449dbf0] 0x1044a0450 <nil>  [] 0s} localhost 54315 <nil> <nil>}
	I0318 05:03:08.579041   21725 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-349000 && echo "running-upgrade-349000" | sudo tee /etc/hostname
	I0318 05:03:08.658302   21725 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-349000
	
	I0318 05:03:08.658342   21725 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:08.658456   21725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10449dbf0] 0x1044a0450 <nil>  [] 0s} localhost 54315 <nil> <nil>}
	I0318 05:03:08.658467   21725 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-349000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-349000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-349000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 05:03:08.732206   21725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 05:03:08.732221   21725 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18427-19517/.minikube CaCertPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18427-19517/.minikube}
	I0318 05:03:08.732230   21725 buildroot.go:174] setting up certificates
	I0318 05:03:08.732240   21725 provision.go:84] configureAuth start
	I0318 05:03:08.732245   21725 provision.go:143] copyHostCerts
	I0318 05:03:08.732311   21725 exec_runner.go:144] found /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.pem, removing ...
	I0318 05:03:08.732320   21725 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.pem
	I0318 05:03:08.732425   21725 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.pem (1078 bytes)
	I0318 05:03:08.732592   21725 exec_runner.go:144] found /Users/jenkins/minikube-integration/18427-19517/.minikube/cert.pem, removing ...
	I0318 05:03:08.732596   21725 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18427-19517/.minikube/cert.pem
	I0318 05:03:08.732637   21725 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18427-19517/.minikube/cert.pem (1123 bytes)
	I0318 05:03:08.732735   21725 exec_runner.go:144] found /Users/jenkins/minikube-integration/18427-19517/.minikube/key.pem, removing ...
	I0318 05:03:08.732738   21725 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18427-19517/.minikube/key.pem
	I0318 05:03:08.732771   21725 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18427-19517/.minikube/key.pem (1679 bytes)
	I0318 05:03:08.732855   21725 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-349000 san=[127.0.0.1 localhost minikube running-upgrade-349000]
	I0318 05:03:08.883831   21725 provision.go:177] copyRemoteCerts
	I0318 05:03:08.883873   21725 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 05:03:08.883883   21725 sshutil.go:53] new ssh client: &{IP:localhost Port:54315 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/running-upgrade-349000/id_rsa Username:docker}
	I0318 05:03:08.925627   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 05:03:08.933181   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 05:03:08.940345   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 05:03:08.947070   21725 provision.go:87] duration metric: took 214.827625ms to configureAuth
	I0318 05:03:08.947084   21725 buildroot.go:189] setting minikube options for container-runtime
	I0318 05:03:08.947186   21725 config.go:182] Loaded profile config "running-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 05:03:08.947231   21725 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:08.947319   21725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10449dbf0] 0x1044a0450 <nil>  [] 0s} localhost 54315 <nil> <nil>}
	I0318 05:03:08.947324   21725 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 05:03:09.021434   21725 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 05:03:09.021444   21725 buildroot.go:70] root file system type: tmpfs
	I0318 05:03:09.021497   21725 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 05:03:09.021547   21725 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:09.021657   21725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10449dbf0] 0x1044a0450 <nil>  [] 0s} localhost 54315 <nil> <nil>}
	I0318 05:03:09.021690   21725 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 05:03:09.098747   21725 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 05:03:09.098813   21725 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:09.098933   21725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10449dbf0] 0x1044a0450 <nil>  [] 0s} localhost 54315 <nil> <nil>}
	I0318 05:03:09.098941   21725 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 05:03:09.174095   21725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 05:03:09.174107   21725 machine.go:97] duration metric: took 669.030583ms to provisionDockerMachine
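Note on the step above: the unit is only swapped in and docker restarted when the rendered file actually changed, because `diff -u` exits non-zero on any difference and the `{ mv; daemon-reload; enable; restart; }` branch runs only then. A minimal Go sketch of building that guarded command (a hypothetical helper, not minikube's actual function):

    package main

    import "fmt"

    // updateUnitCmd returns a shell one-liner that installs <path>.new over <path>
    // only when the two differ; diff exits 0 on identical files, so the right-hand
    // side of || (move + daemon-reload + enable + restart) runs only on changes.
    func updateUnitCmd(path, unit string) string {
        return fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
                "sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
            path, unit)
    }

    func main() {
        fmt.Println(updateUnitCmd("/lib/systemd/system/docker.service", "docker"))
    }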
	I0318 05:03:09.174112   21725 start.go:293] postStartSetup for "running-upgrade-349000" (driver="qemu2")
	I0318 05:03:09.174119   21725 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 05:03:09.174176   21725 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 05:03:09.174184   21725 sshutil.go:53] new ssh client: &{IP:localhost Port:54315 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/running-upgrade-349000/id_rsa Username:docker}
	I0318 05:03:09.212565   21725 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 05:03:09.213995   21725 info.go:137] Remote host: Buildroot 2021.02.12
	I0318 05:03:09.214003   21725 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18427-19517/.minikube/addons for local assets ...
	I0318 05:03:09.214066   21725 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18427-19517/.minikube/files for local assets ...
	I0318 05:03:09.214154   21725 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18427-19517/.minikube/files/etc/ssl/certs/199262.pem -> 199262.pem in /etc/ssl/certs
	I0318 05:03:09.214242   21725 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 05:03:09.216814   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/files/etc/ssl/certs/199262.pem --> /etc/ssl/certs/199262.pem (1708 bytes)
	I0318 05:03:09.223780   21725 start.go:296] duration metric: took 49.663167ms for postStartSetup
	I0318 05:03:09.223795   21725 fix.go:56] duration metric: took 731.624292ms for fixHost
	I0318 05:03:09.223830   21725 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:09.223937   21725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10449dbf0] 0x1044a0450 <nil>  [] 0s} localhost 54315 <nil> <nil>}
	I0318 05:03:09.223942   21725 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 05:03:09.297178   21725 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710763389.588434346
	
	I0318 05:03:09.297187   21725 fix.go:216] guest clock: 1710763389.588434346
	I0318 05:03:09.297191   21725 fix.go:229] Guest: 2024-03-18 05:03:09.588434346 -0700 PDT Remote: 2024-03-18 05:03:09.223797 -0700 PDT m=+11.838400876 (delta=364.637346ms)
	I0318 05:03:09.297203   21725 fix.go:200] guest clock delta is within tolerance: 364.637346ms
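The guest clock check parses the `date +%s.%N` output (e.g. 1710763389.588434346), compares it with host time, and accepts the host's fixHost result only when the delta is inside a tolerance. A rough Go sketch of the parse and tolerance test (the 2s tolerance and helper names are illustrative, not minikube's):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts "1710763389.588434346" (output of `date +%s.%N`)
    // into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        secStr, nsecStr, found := strings.Cut(strings.TrimSpace(out), ".")
        if !found {
            nsecStr = "0"
        }
        sec, err := strconv.ParseInt(secStr, 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        nsec, err := strconv.ParseInt(nsecStr, 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1710763389.588434346")
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        // A delta under the tolerance (2s here, an assumed value) is accepted.
        fmt.Printf("guest clock delta: %v (ok: %v)\n", delta, math.Abs(delta.Seconds()) < 2)
    }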
	I0318 05:03:09.297209   21725 start.go:83] releasing machines lock for "running-upgrade-349000", held for 805.05375ms
	I0318 05:03:09.297284   21725 ssh_runner.go:195] Run: cat /version.json
	I0318 05:03:09.297294   21725 sshutil.go:53] new ssh client: &{IP:localhost Port:54315 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/running-upgrade-349000/id_rsa Username:docker}
	I0318 05:03:09.297284   21725 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 05:03:09.297329   21725 sshutil.go:53] new ssh client: &{IP:localhost Port:54315 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/running-upgrade-349000/id_rsa Username:docker}
	W0318 05:03:09.440634   21725 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0318 05:03:09.440715   21725 ssh_runner.go:195] Run: systemctl --version
	I0318 05:03:09.442442   21725 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 05:03:09.444086   21725 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 05:03:09.444111   21725 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0318 05:03:09.447486   21725 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0318 05:03:09.452081   21725 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 05:03:09.452089   21725 start.go:494] detecting cgroup driver to use...
	I0318 05:03:09.452165   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 05:03:09.457587   21725 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0318 05:03:09.460375   21725 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 05:03:09.463829   21725 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 05:03:09.463857   21725 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 05:03:09.467403   21725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 05:03:09.470786   21725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 05:03:09.473584   21725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 05:03:09.476542   21725 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 05:03:09.480151   21725 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 05:03:09.483145   21725 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 05:03:09.486138   21725 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 05:03:09.488939   21725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:09.592585   21725 ssh_runner.go:195] Run: sudo systemctl restart containerd
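The run of `sed` edits above rewrites /etc/containerd/config.toml in place so containerd uses the cgroupfs driver (SystemdCgroup = false), matching the cgroup driver the kubelet will be configured with later. The core rewrite expressed as a Go regexp, as a sketch of what the sed expression does:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setSystemdCgroup mirrors the edit
    //   sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    // preserving the line's leading indentation while forcing the value.
    func setSystemdCgroup(config string, enabled bool) string {
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        return re.ReplaceAllString(config, fmt.Sprintf("${1}SystemdCgroup = %v", enabled))
    }

    func main() {
        in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true`
        fmt.Println(setSystemdCgroup(in, false))
    }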
	I0318 05:03:09.600847   21725 start.go:494] detecting cgroup driver to use...
	I0318 05:03:09.600922   21725 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 05:03:09.606873   21725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 05:03:09.611318   21725 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 05:03:09.617777   21725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 05:03:09.622736   21725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 05:03:09.627494   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 05:03:09.632872   21725 ssh_runner.go:195] Run: which cri-dockerd
	I0318 05:03:09.634132   21725 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 05:03:09.636717   21725 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 05:03:09.641646   21725 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 05:03:09.749315   21725 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 05:03:09.850915   21725 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 05:03:09.850974   21725 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 05:03:09.856344   21725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:09.957489   21725 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 05:03:26.698296   21725 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.741323292s)
	I0318 05:03:26.698377   21725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 05:03:26.702848   21725 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0318 05:03:26.711437   21725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 05:03:26.716046   21725 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 05:03:26.794939   21725 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 05:03:26.888720   21725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:26.970123   21725 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 05:03:26.975785   21725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 05:03:26.980589   21725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:27.070918   21725 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 05:03:27.109355   21725 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 05:03:27.109429   21725 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 05:03:27.111541   21725 start.go:562] Will wait 60s for crictl version
	I0318 05:03:27.111600   21725 ssh_runner.go:195] Run: which crictl
	I0318 05:03:27.113329   21725 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 05:03:27.125164   21725 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0318 05:03:27.125231   21725 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 05:03:27.137572   21725 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 05:03:27.153006   21725 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0318 05:03:27.153078   21725 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0318 05:03:27.154314   21725 kubeadm.go:877] updating cluster {Name:running-upgrade-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54379 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0318 05:03:27.154375   21725 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 05:03:27.154418   21725 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 05:03:27.166407   21725 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 05:03:27.166417   21725 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 05:03:27.166463   21725 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 05:03:27.169387   21725 ssh_runner.go:195] Run: which lz4
	I0318 05:03:27.170691   21725 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 05:03:27.171941   21725 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 05:03:27.171950   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0318 05:03:27.861365   21725 docker.go:649] duration metric: took 690.727334ms to copy over tarball
	I0318 05:03:27.861413   21725 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 05:03:28.974559   21725 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.113167792s)
	I0318 05:03:28.974574   21725 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 05:03:28.990234   21725 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 05:03:28.993241   21725 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0318 05:03:28.998008   21725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:29.082972   21725 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 05:03:30.771498   21725 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.688563792s)
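The preload sequence above first probes the guest with `stat -c "%s %y"`; a non-zero exit (as in the "cannot statx" error) means the tarball is absent and must be scp'd before the `tar ... -I lz4` extraction. A sketch of that existence-check decision, assuming a size comparison on success (illustrative only):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // needsCopy interprets the `stat -c "%s %y" <path>` probe: a failed stat means
    // the remote file is missing; otherwise the reported size is compared with the
    // local file's size to decide whether to re-transfer.
    func needsCopy(statOut string, statErr error, localSize int64) bool {
        if statErr != nil {
            return true // e.g. "cannot statx '...': No such file or directory"
        }
        fields := strings.Fields(statOut)
        if len(fields) == 0 {
            return true
        }
        size, err := strconv.ParseInt(fields[0], 10, 64)
        return err != nil || size != localSize
    }

    func main() {
        fmt.Println(needsCopy("", fmt.Errorf("Process exited with status 1"), 359514331)) // true: copy
        fmt.Println(needsCopy("359514331 2024-03-18 12:03:27", nil, 359514331))           // false: skip
    }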
	I0318 05:03:30.771592   21725 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 05:03:30.787188   21725 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 05:03:30.787198   21725 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 05:03:30.787203   21725 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 05:03:30.793794   21725 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 05:03:30.793816   21725 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0318 05:03:30.793952   21725 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:03:30.793991   21725 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 05:03:30.794026   21725 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:03:30.794072   21725 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 05:03:30.794111   21725 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0318 05:03:30.794183   21725 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 05:03:30.802876   21725 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:03:30.803015   21725 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 05:03:30.803046   21725 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 05:03:30.803270   21725 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0318 05:03:30.803273   21725 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0318 05:03:30.803295   21725 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 05:03:30.804371   21725 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:03:30.804898   21725 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	W0318 05:03:32.694010   21725 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0318 05:03:32.694261   21725 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:03:32.714659   21725 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0318 05:03:32.714692   21725 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:03:32.714765   21725 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:03:32.729031   21725 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0318 05:03:32.729156   21725 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0318 05:03:32.731054   21725 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0318 05:03:32.731073   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0318 05:03:32.753347   21725 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0318 05:03:32.772143   21725 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0318 05:03:32.772156   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0318 05:03:32.780555   21725 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0318 05:03:32.780577   21725 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 05:03:32.780638   21725 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0318 05:03:32.783451   21725 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 05:03:32.812948   21725 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0318 05:03:32.831922   21725 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0318 05:03:32.831955   21725 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0318 05:03:32.832003   21725 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0318 05:03:32.832025   21725 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0318 05:03:32.832053   21725 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0318 05:03:32.832025   21725 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 05:03:32.832098   21725 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0318 05:03:32.832099   21725 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 05:03:32.834710   21725 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0318 05:03:32.840881   21725 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0318 05:03:32.843885   21725 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0318 05:03:32.845521   21725 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0318 05:03:32.845927   21725 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0318 05:03:32.857103   21725 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0318 05:03:32.857125   21725 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0318 05:03:32.857180   21725 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0318 05:03:32.861184   21725 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0318 05:03:32.861205   21725 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 05:03:32.861260   21725 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0318 05:03:32.867634   21725 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0318 05:03:32.867654   21725 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 05:03:32.867716   21725 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0318 05:03:32.877482   21725 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0318 05:03:32.877592   21725 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0318 05:03:32.885326   21725 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0318 05:03:32.886901   21725 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0318 05:03:32.886919   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0318 05:03:32.887028   21725 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0318 05:03:32.894476   21725 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0318 05:03:32.894485   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0318 05:03:32.922780   21725 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
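Each cached image is streamed into the runtime with `sudo cat <tar> | docker load`, since the tarballs under /var/lib/minikube/images are root-owned. The command construction, sketched as a hypothetical helper:

    package main

    import "fmt"

    // dockerLoadCmd builds the pipeline used above: the image tarball is read with
    // sudo and piped into `docker load` on the guest.
    func dockerLoadCmd(path string) string {
        return fmt.Sprintf(`/bin/bash -c "sudo cat %s | docker load"`, path)
    }

    func main() {
        fmt.Println(dockerLoadCmd("/var/lib/minikube/images/pause_3.7"))
    }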
	W0318 05:03:33.428454   21725 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0318 05:03:33.429111   21725 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:03:33.469431   21725 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0318 05:03:33.469473   21725 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:03:33.469582   21725 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:03:33.490174   21725 cache_images.go:92] duration metric: took 2.70304225s to LoadCachedImages
	W0318 05:03:33.490230   21725 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0318 05:03:33.490239   21725 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0318 05:03:33.490312   21725 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-349000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 05:03:33.490393   21725 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 05:03:33.507945   21725 cni.go:84] Creating CNI manager for ""
	I0318 05:03:33.507957   21725 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:03:33.507962   21725 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 05:03:33.507971   21725 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-349000 NodeName:running-upgrade-349000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 05:03:33.508053   21725 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-349000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 05:03:33.508112   21725 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0318 05:03:33.511876   21725 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 05:03:33.511904   21725 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 05:03:33.515031   21725 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0318 05:03:33.520432   21725 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 05:03:33.525404   21725 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0318 05:03:33.530683   21725 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0318 05:03:33.531924   21725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:33.626822   21725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 05:03:33.631784   21725 certs.go:68] Setting up /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000 for IP: 10.0.2.15
	I0318 05:03:33.631793   21725 certs.go:194] generating shared ca certs ...
	I0318 05:03:33.631801   21725 certs.go:226] acquiring lock for ca certs: {Name:mk67337f74312fe6750257c43ce98e6fa0b5d738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:03:33.631935   21725 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.key
	I0318 05:03:33.631970   21725 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/proxy-client-ca.key
	I0318 05:03:33.631976   21725 certs.go:256] generating profile certs ...
	I0318 05:03:33.632038   21725 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/client.key
	I0318 05:03:33.632054   21725 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.key.c00468f0
	I0318 05:03:33.632065   21725 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.crt.c00468f0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0318 05:03:33.711684   21725 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.crt.c00468f0 ...
	I0318 05:03:33.711696   21725 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.crt.c00468f0: {Name:mk407906b5df038122ffa715219255414a809a59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:03:33.711969   21725 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.key.c00468f0 ...
	I0318 05:03:33.711974   21725 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.key.c00468f0: {Name:mkb2110062d9ecb95c1e2a8df75a80d9cd55ba13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:03:33.712097   21725 certs.go:381] copying /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.crt.c00468f0 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.crt
	I0318 05:03:33.713090   21725 certs.go:385] copying /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.key.c00468f0 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.key
	I0318 05:03:33.713256   21725 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/proxy-client.key
	I0318 05:03:33.713373   21725 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/19926.pem (1338 bytes)
	W0318 05:03:33.713397   21725 certs.go:480] ignoring /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/19926_empty.pem, impossibly tiny 0 bytes
	I0318 05:03:33.713401   21725 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 05:03:33.713419   21725 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem (1078 bytes)
	I0318 05:03:33.713437   21725 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem (1123 bytes)
	I0318 05:03:33.713452   21725 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/key.pem (1679 bytes)
	I0318 05:03:33.713491   21725 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/files/etc/ssl/certs/199262.pem (1708 bytes)
	I0318 05:03:33.713834   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 05:03:33.721684   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 05:03:33.728809   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 05:03:33.735657   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0318 05:03:33.743454   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 05:03:33.750014   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 05:03:33.756880   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 05:03:33.763923   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 05:03:33.771379   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 05:03:33.778494   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/19926.pem --> /usr/share/ca-certificates/19926.pem (1338 bytes)
	I0318 05:03:33.785284   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/files/etc/ssl/certs/199262.pem --> /usr/share/ca-certificates/199262.pem (1708 bytes)
	I0318 05:03:33.792199   21725 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 05:03:33.797354   21725 ssh_runner.go:195] Run: openssl version
	I0318 05:03:33.798988   21725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19926.pem && ln -fs /usr/share/ca-certificates/19926.pem /etc/ssl/certs/19926.pem"
	I0318 05:03:33.802024   21725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19926.pem
	I0318 05:03:33.803335   21725 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 11:50 /usr/share/ca-certificates/19926.pem
	I0318 05:03:33.803352   21725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19926.pem
	I0318 05:03:33.805210   21725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19926.pem /etc/ssl/certs/51391683.0"
	I0318 05:03:33.808133   21725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199262.pem && ln -fs /usr/share/ca-certificates/199262.pem /etc/ssl/certs/199262.pem"
	I0318 05:03:33.811347   21725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199262.pem
	I0318 05:03:33.812675   21725 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 11:50 /usr/share/ca-certificates/199262.pem
	I0318 05:03:33.812695   21725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199262.pem
	I0318 05:03:33.814623   21725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199262.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 05:03:33.817308   21725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 05:03:33.820381   21725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 05:03:33.821770   21725 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:02 /usr/share/ca-certificates/minikubeCA.pem
	I0318 05:03:33.821791   21725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 05:03:33.823504   21725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
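Each installed CA is also linked under its OpenSSL subject hash (e.g. b5213941.0) so OpenSSL's hashed-directory lookup in /etc/ssl/certs can find it; the hash comes from the preceding `openssl x509 -hash -noout` run. A sketch of building the link command from that hash (helper name is illustrative):

    package main

    import "fmt"

    // hashLinkCmd links an installed PEM into /etc/ssl/certs under <hash>.0,
    // creating it only if the hashed name is not already a symlink.
    func hashLinkCmd(pemName, hash string) string {
        return fmt.Sprintf(
            `sudo /bin/bash -c "test -L /etc/ssl/certs/%[2]s.0 || ln -fs /etc/ssl/certs/%[1]s /etc/ssl/certs/%[2]s.0"`,
            pemName, hash)
    }

    func main() {
        fmt.Println(hashLinkCmd("minikubeCA.pem", "b5213941"))
    }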
	I0318 05:03:33.826566   21725 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 05:03:33.828009   21725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 05:03:33.829899   21725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 05:03:33.831560   21725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 05:03:33.833443   21725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 05:03:33.835687   21725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 05:03:33.838681   21725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
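The `-checkend 86400` probes above verify that each control-plane certificate is still valid 24 hours from now; openssl exits non-zero if the cert would expire within that window. The equivalent check written against Go's crypto/x509, as a sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the PEM-encoded certificate at path is still valid
    // `window` from now — the same question `openssl x509 -checkend` answers.
    func validFor(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }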
	I0318 05:03:33.840439   21725 kubeadm.go:391] StartCluster: {Name:running-upgrade-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54379 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 05:03:33.840508   21725 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 05:03:33.850972   21725 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 05:03:33.854224   21725 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 05:03:33.854230   21725 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 05:03:33.854233   21725 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 05:03:33.854253   21725 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 05:03:33.857163   21725 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 05:03:33.857432   21725 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-349000" does not appear in /Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:03:33.857547   21725 kubeconfig.go:62] /Users/jenkins/minikube-integration/18427-19517/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-349000" cluster setting kubeconfig missing "running-upgrade-349000" context setting]
	I0318 05:03:33.857734   21725 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/kubeconfig: {Name:mke65151970e01af41afaa654a36ecdb221d1a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:03:33.858130   21725 kapi.go:59] client config for running-upgrade-349000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/client.key", CAFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10578ea80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 05:03:33.858452   21725 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 05:03:33.861111   21725 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-349000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0318 05:03:33.861117   21725 kubeadm.go:1154] stopping kube-system containers ...
	I0318 05:03:33.861161   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 05:03:33.873391   21725 docker.go:483] Stopping containers: [c92808339edd d988b026b77e d4f26039d08f a4880ca05709 82437f53be1f 1cf5bd1f2f5d 39606e718772 3d1d66d16a8e fb7044aa6fe8 eab46fcf2c4f 08607bd13bb5 0f0ff398976b 979957847e88 09fd4ef3cc7e 525748e95af3 5b5f45df096f 81416833671d b534994d7aae 4dfc21fbd434 b60836a37ed6 0d3907cde91d 99caf181965e]
	I0318 05:03:33.873461   21725 ssh_runner.go:195] Run: docker stop c92808339edd d988b026b77e d4f26039d08f a4880ca05709 82437f53be1f 1cf5bd1f2f5d 39606e718772 3d1d66d16a8e fb7044aa6fe8 eab46fcf2c4f 08607bd13bb5 0f0ff398976b 979957847e88 09fd4ef3cc7e 525748e95af3 5b5f45df096f 81416833671d b534994d7aae 4dfc21fbd434 b60836a37ed6 0d3907cde91d 99caf181965e
	I0318 05:03:33.885464   21725 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 05:03:33.981587   21725 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 05:03:33.984776   21725 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5639 Mar 18 12:02 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Mar 18 12:02 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Mar 18 12:02 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Mar 18 12:02 /etc/kubernetes/scheduler.conf
	
	I0318 05:03:33.984819   21725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/admin.conf
	I0318 05:03:33.987883   21725 kubeadm.go:162] "https://control-plane.minikube.internal:54379" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 05:03:33.987909   21725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 05:03:33.991165   21725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/kubelet.conf
	I0318 05:03:33.994392   21725 kubeadm.go:162] "https://control-plane.minikube.internal:54379" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 05:03:33.994416   21725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 05:03:33.997009   21725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/controller-manager.conf
	I0318 05:03:33.999889   21725 kubeadm.go:162] "https://control-plane.minikube.internal:54379" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 05:03:33.999915   21725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 05:03:34.002985   21725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/scheduler.conf
	I0318 05:03:34.005660   21725 kubeadm.go:162] "https://control-plane.minikube.internal:54379" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 05:03:34.005681   21725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
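The block above is the stale-kubeconfig sweep: each file under /etc/kubernetes is grepped for the expected control-plane endpoint, and a grep exit status of 1 (no match, logged as "may not be in ... - will remove") triggers removal so the kubeadm phases below can regenerate the file. A condensed sketch of that check-and-remove loop:

	# endpoint and file names taken from the log; kubeadm recreates the files below
	endpoint="https://control-plane.minikube.internal:54379"
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	done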
	I0318 05:03:34.008311   21725 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 05:03:34.011538   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 05:03:34.046328   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 05:03:34.397306   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 05:03:34.621820   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 05:03:34.656125   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
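Rather than a full kubeadm init run, the five invocations above rebuild the control plane piecewise, in the order certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of the same sequence as one loop, with the PATH and config paths copied from the log:

	for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
	  # $phase is intentionally unquoted so "certs all" splits into two arguments
	  sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	    kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done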
	I0318 05:03:34.680426   21725 api_server.go:52] waiting for apiserver process to appear ...
	I0318 05:03:34.680509   21725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:03:35.180810   21725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:03:35.682541   21725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:03:36.182534   21725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:03:36.682479   21725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:03:37.182514   21725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:03:37.186733   21725 api_server.go:72] duration metric: took 2.506387083s to wait for apiserver process to appear ...
	I0318 05:03:37.186742   21725 api_server.go:88] waiting for apiserver healthz status ...
	I0318 05:03:37.186758   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:42.187752   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:42.187777   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:47.188575   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:47.188642   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:52.189107   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:52.189176   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:57.189750   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:57.189795   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:02.190334   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:02.190409   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:07.191196   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:07.191253   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:12.192325   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:12.192350   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:17.193598   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:17.193629   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:22.195249   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:22.195271   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:27.197346   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:27.197397   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:32.197668   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:32.197704   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:37.199778   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
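From here the run settles into the failure loop that fills the rest of this excerpt: a healthz probe against https://10.0.2.15:8443/healthz times out after 5 seconds, minikube gathers diagnostics, and the probe repeats. The same endpoint can be probed by hand with the same timeout budget:

	# -k skips TLS verification (the apiserver presents a cluster-internal cert)
	curl -k --max-time 5 https://10.0.2.15:8443/healthz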
	I0318 05:04:37.200034   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:04:37.223720   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:04:37.223826   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:04:37.239468   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:04:37.239538   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:04:37.252711   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:04:37.252789   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:04:37.264258   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:04:37.264331   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:04:37.284534   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:04:37.284610   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:04:37.295475   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:04:37.295538   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:04:37.306677   21725 logs.go:276] 0 containers: []
	W0318 05:04:37.306687   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:04:37.306750   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:04:37.317306   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:04:37.317321   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:04:37.317327   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:04:37.356395   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:04:37.356407   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:04:37.360600   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:04:37.360608   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:04:37.374626   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:04:37.374636   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:04:37.386800   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:04:37.386811   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:04:37.411097   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:04:37.411108   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:04:37.422853   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:04:37.422865   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:04:37.450291   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:04:37.450300   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:04:37.462522   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:04:37.462532   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:04:37.552237   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:04:37.552249   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:04:37.591827   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:04:37.591838   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:04:37.606180   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:04:37.606194   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:04:37.620474   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:04:37.620489   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:04:37.632091   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:04:37.632102   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:04:37.647053   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:04:37.647067   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:04:37.658875   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:04:37.658888   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:04:37.676007   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:04:37.676018   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:04:37.687402   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:04:37.687414   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:04:37.699507   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:04:37.699517   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
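Each diagnostic pass, like the one just completed, lists two container IDs per control-plane component (the current attempt and the one stopped earlier), tails 400 lines from each, and adds the kubelet and Docker journals, dmesg, container status, and kubectl describe nodes. The per-container step condenses to:

	# tail the last 400 lines of every k8s_ container in one sweep
	for id in $(docker ps -a --filter=name=k8s_ --format='{{.ID}}'); do
	  docker logs --tail 400 "$id"
	done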
	I0318 05:04:40.214574   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:45.216790   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:45.217015   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:04:45.243744   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:04:45.243856   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:04:45.258990   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:04:45.259062   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:04:45.271512   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:04:45.271583   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:04:45.282264   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:04:45.282329   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:04:45.293042   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:04:45.293113   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:04:45.304657   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:04:45.304732   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:04:45.340618   21725 logs.go:276] 0 containers: []
	W0318 05:04:45.340635   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:04:45.340725   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:04:45.356451   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:04:45.356469   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:04:45.356475   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:04:45.376724   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:04:45.376735   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:04:45.388001   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:04:45.388015   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:04:45.404338   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:04:45.404348   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:04:45.417037   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:04:45.417049   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:04:45.428331   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:04:45.428342   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:04:45.440327   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:04:45.440339   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:04:45.457384   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:04:45.457396   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:04:45.468991   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:04:45.469004   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:04:45.473857   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:04:45.473865   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:04:45.510629   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:04:45.510642   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:04:45.527650   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:04:45.527660   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:04:45.538571   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:04:45.538582   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:04:45.577956   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:04:45.577968   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:04:45.592164   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:04:45.592174   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:04:45.609110   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:04:45.609121   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:04:45.626435   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:04:45.626445   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:04:45.666955   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:04:45.666963   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:04:45.680577   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:04:45.680588   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:04:48.207819   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:53.209969   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:53.210126   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:04:53.223885   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:04:53.223971   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:04:53.235870   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:04:53.235947   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:04:53.247248   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:04:53.247322   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:04:53.257723   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:04:53.257784   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:04:53.268158   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:04:53.268218   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:04:53.283618   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:04:53.283686   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:04:53.294339   21725 logs.go:276] 0 containers: []
	W0318 05:04:53.294348   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:04:53.294399   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:04:53.305093   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:04:53.305112   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:04:53.305118   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:04:53.317347   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:04:53.317360   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:04:53.333351   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:04:53.333361   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:04:53.360152   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:04:53.360159   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:04:53.399141   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:04:53.399155   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:04:53.436124   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:04:53.436139   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:04:53.450641   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:04:53.450652   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:04:53.461459   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:04:53.461471   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:04:53.473114   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:04:53.473123   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:04:53.488734   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:04:53.488747   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:04:53.504876   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:04:53.504888   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:04:53.522874   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:04:53.522887   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:04:53.534785   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:04:53.534801   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:04:53.549338   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:04:53.549352   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:04:53.560915   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:04:53.560926   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:04:53.565742   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:04:53.565748   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:04:53.602317   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:04:53.602331   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:04:53.615999   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:04:53.616011   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:04:53.628147   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:04:53.628159   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:04:56.143624   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:01.145736   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:01.145857   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:01.157833   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:05:01.157917   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:01.168951   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:05:01.169018   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:01.179324   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:05:01.179399   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:01.194869   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:05:01.194935   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:01.208042   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:05:01.208119   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:01.218564   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:05:01.218633   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:01.228364   21725 logs.go:276] 0 containers: []
	W0318 05:05:01.228375   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:01.228438   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:01.238431   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:05:01.238448   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:05:01.238453   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:05:01.252336   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:05:01.252348   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:05:01.266205   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:05:01.266214   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:05:01.277652   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:05:01.277666   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:05:01.289269   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:05:01.289282   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:05:01.306587   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:05:01.306601   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:05:01.317927   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:05:01.317939   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:05:01.332133   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:05:01.332143   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:05:01.345689   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:01.345699   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:01.373284   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:01.373292   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:01.413445   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:05:01.413454   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:05:01.450127   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:05:01.450137   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:05:01.466344   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:05:01.466358   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:05:01.477908   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:01.477920   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:01.482883   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:01.482897   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:01.531110   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:05:01.531124   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:05:01.542986   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:05:01.543011   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:05:01.559880   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:05:01.559891   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:05:01.572014   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:05:01.572027   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:04.084751   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:09.087011   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:09.087299   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:09.116590   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:05:09.116662   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:09.130138   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:05:09.130202   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:09.141637   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:05:09.141700   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:09.152919   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:05:09.152983   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:09.163806   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:05:09.163865   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:09.174249   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:05:09.174314   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:09.184856   21725 logs.go:276] 0 containers: []
	W0318 05:05:09.184866   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:09.184915   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:09.195638   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:05:09.195656   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:05:09.195662   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:05:09.232925   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:05:09.232936   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:05:09.251336   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:05:09.251347   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:05:09.263500   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:05:09.263511   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:05:09.274370   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:05:09.274384   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:05:09.286105   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:05:09.286118   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:05:09.302413   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:09.302423   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:09.341007   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:09.341014   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:09.345584   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:05:09.345591   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:05:09.359359   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:05:09.359369   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:05:09.373692   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:05:09.373702   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:05:09.384795   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:05:09.384809   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:05:09.397757   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:05:09.397768   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:05:09.411665   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:09.411675   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:09.438288   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:05:09.438300   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:09.450642   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:09.450652   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:09.485869   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:05:09.485883   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:05:09.499990   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:05:09.500001   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:05:09.512058   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:05:09.512068   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:05:12.026314   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:17.029036   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:17.029478   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:17.070683   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:05:17.070827   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:17.095364   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:05:17.095481   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:17.111340   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:05:17.111437   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:17.123253   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:05:17.123329   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:17.133942   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:05:17.134019   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:17.146260   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:05:17.146337   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:17.157161   21725 logs.go:276] 0 containers: []
	W0318 05:05:17.157172   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:17.157234   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:17.167762   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:05:17.167776   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:05:17.167785   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:05:17.179562   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:17.179573   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:17.215539   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:05:17.215550   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:05:17.227191   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:05:17.227203   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:05:17.243266   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:05:17.243276   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:05:17.261162   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:05:17.261177   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:05:17.277764   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:05:17.277777   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:05:17.291272   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:05:17.291283   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:05:17.302897   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:05:17.302908   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:05:17.314199   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:05:17.314213   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:05:17.329042   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:05:17.329053   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:05:17.343617   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:17.343628   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:17.369085   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:05:17.369098   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:17.383262   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:05:17.383274   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:05:17.397338   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:05:17.397352   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:05:17.411757   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:17.411767   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:17.453144   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:17.453153   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:17.457481   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:05:17.457488   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:05:17.495355   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:05:17.495367   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:05:20.008227   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:25.010952   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:25.011436   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:25.049005   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:05:25.049139   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:25.069016   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:05:25.069120   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:25.083846   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:05:25.083927   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:25.098952   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:05:25.099030   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:25.110429   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:05:25.110501   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:25.123085   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:05:25.123160   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:25.133467   21725 logs.go:276] 0 containers: []
	W0318 05:05:25.133496   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:25.133557   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:25.145080   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:05:25.145097   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:05:25.145102   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:05:25.156935   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:05:25.156949   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:05:25.170627   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:25.170639   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:25.175305   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:05:25.175316   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:05:25.190315   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:05:25.190330   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:05:25.202320   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:05:25.202331   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:05:25.220672   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:05:25.220686   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:05:25.234314   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:25.234325   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:25.274088   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:25.274096   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:25.308087   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:05:25.308099   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:05:25.344912   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:05:25.344923   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:05:25.358993   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:05:25.359004   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:05:25.369980   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:05:25.369996   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:05:25.384029   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:05:25.384041   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:05:25.395418   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:05:25.395427   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:05:25.411823   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:05:25.411842   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:05:25.423908   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:05:25.423920   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:05:25.435072   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:25.435083   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:25.460867   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:05:25.460876   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:27.974883   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:32.977589   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:32.977959   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:33.006536   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:05:33.006668   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:33.025308   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:05:33.025398   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:33.039491   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:05:33.039575   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:33.050743   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:05:33.050807   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:33.061124   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:05:33.061198   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:33.071666   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:05:33.071740   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:33.081765   21725 logs.go:276] 0 containers: []
	W0318 05:05:33.081780   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:33.081836   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:33.097208   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:05:33.097236   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:33.097242   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:33.136189   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:05:33.136197   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:05:33.149928   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:05:33.149940   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:05:33.164790   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:05:33.164801   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:05:33.176131   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:33.176144   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:33.213662   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:05:33.213673   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:05:33.225500   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:05:33.225514   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:05:33.238640   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:05:33.238650   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:05:33.251295   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:33.251308   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:33.256123   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:05:33.256130   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:05:33.293025   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:05:33.293036   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:05:33.304687   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:05:33.304698   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:05:33.316480   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:05:33.316491   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:33.328727   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:05:33.328738   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:05:33.346422   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:05:33.346434   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:05:33.359078   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:05:33.359090   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:05:33.375262   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:05:33.375275   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:05:33.386505   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:05:33.386519   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:05:33.403164   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:33.403175   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:35.930434   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:40.932468   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:40.932670   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:40.948182   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:05:40.948271   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:40.961320   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:05:40.961397   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:40.972001   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:05:40.972076   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:40.986549   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:05:40.986626   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:40.997938   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:05:40.998008   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:41.008830   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:05:41.008898   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:41.019948   21725 logs.go:276] 0 containers: []
	W0318 05:05:41.019960   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:41.020026   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:41.030558   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:05:41.030572   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:05:41.030577   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:05:41.042197   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:41.042209   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:41.067712   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:05:41.067721   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:05:41.081130   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:05:41.081142   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:05:41.094503   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:05:41.094514   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:05:41.106690   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:05:41.106704   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:05:41.119662   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:05:41.119673   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:05:41.136566   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:05:41.136581   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:05:41.151757   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:05:41.151769   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:05:41.169768   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:05:41.169779   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:05:41.185491   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:05:41.185502   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:05:41.196823   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:05:41.196835   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:05:41.213483   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:41.213495   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:41.254134   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:41.254147   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:41.259119   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:41.259132   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:41.300959   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:05:41.300969   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:05:41.337169   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:05:41.337181   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:05:41.348500   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:05:41.348513   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:05:41.366933   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:05:41.366944   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
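
The cycle above then repeats: a /healthz probe that times out after ~5s, followed by another round of log gathering roughly 2.5s later. As a minimal sketch of that polling pattern only (the URL, timeout, and retry interval are assumptions read off the timestamps in this log, not minikube's actual api_server.go implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s "Client.Timeout exceeded" gaps above
		Transport: &http.Transport{
			// the apiserver serves a self-signed cert inside the VM
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		// on failure, minikube gathers component logs (as in the lines
		// above) before probing again
		fmt.Println("healthz probe failed, retrying:", err)
		time.Sleep(2500 * time.Millisecond) // roughly the pause between cycles in this log
	}
}

The short per-request timeout plus an outer retry loop is what produces the steady ~8-second cadence of "Checking apiserver healthz" / "stopped" pairs seen throughout this section.
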
	I0318 05:05:43.880907   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:48.883328   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:48.883547   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:48.901745   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:05:48.901842   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:48.914831   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:05:48.914910   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:48.931015   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:05:48.931087   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:48.941844   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:05:48.941913   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:48.952654   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:05:48.952719   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:48.963772   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:05:48.963846   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:48.973888   21725 logs.go:276] 0 containers: []
	W0318 05:05:48.973904   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:48.973969   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:48.988274   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:05:48.988295   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:05:48.988302   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:05:49.001701   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:05:49.001715   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:05:49.041139   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:05:49.041150   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:05:49.052345   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:05:49.052356   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:05:49.064008   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:05:49.064019   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:05:49.075321   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:05:49.075332   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:05:49.089554   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:05:49.089564   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:05:49.101039   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:05:49.101051   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:05:49.118359   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:05:49.118370   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:05:49.136916   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:49.136927   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:49.176926   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:49.176935   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:49.181657   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:05:49.181667   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:05:49.200334   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:05:49.200343   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:05:49.214369   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:05:49.214380   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:49.227664   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:49.227676   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:49.264421   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:05:49.264434   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:05:49.276781   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:05:49.276792   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:05:49.296978   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:49.296993   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:49.321767   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:05:49.321774   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
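
Each failed probe is followed by the same enumeration step: one docker ps name filter per control-plane component, exactly as the "docker ps -a --filter=name=k8s_... --format={{.ID}}" lines above show. A minimal local sketch of that step (running docker directly instead of through minikube's ssh_runner is an assumption for brevity):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited)
// whose name matches the k8s_<component> prefix convention.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// one ID per output line; an empty slice means no match,
	// which is what triggers the `No container was found matching
	// "kindnet"` warning above
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}
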
	I0318 05:05:51.834837   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:56.837174   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:56.837494   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:56.868367   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:05:56.868491   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:56.884892   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:05:56.884974   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:56.898590   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:05:56.898663   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:56.909751   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:05:56.909816   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:56.920665   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:05:56.920730   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:56.931224   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:05:56.931297   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:56.941304   21725 logs.go:276] 0 containers: []
	W0318 05:05:56.941321   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:56.941373   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:56.951817   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:05:56.951834   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:05:56.951839   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:05:56.962556   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:05:56.962569   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:05:56.976290   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:05:56.976302   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:05:56.989359   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:56.989370   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:57.014156   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:05:57.014165   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:57.025480   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:57.025494   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:57.059637   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:05:57.059648   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:05:57.073886   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:05:57.073895   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:05:57.085686   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:05:57.085699   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:05:57.104472   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:05:57.104484   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:05:57.116136   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:57.116149   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:57.155392   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:05:57.155401   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:05:57.166966   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:05:57.166977   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:05:57.185533   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:05:57.185545   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:05:57.202093   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:57.202106   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:57.206716   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:05:57.206725   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:05:57.220868   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:05:57.220879   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:05:57.258600   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:05:57.258611   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:05:57.276887   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:05:57.276898   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:05:59.790074   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:04.792542   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:04.792726   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:04.809873   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:06:04.809960   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:04.820731   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:06:04.820810   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:04.833939   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:06:04.834024   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:04.844324   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:06:04.844392   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:04.855188   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:06:04.855260   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:04.865967   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:06:04.866037   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:04.875719   21725 logs.go:276] 0 containers: []
	W0318 05:06:04.875729   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:04.875779   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:04.886372   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:06:04.886390   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:04.886397   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:04.890927   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:06:04.890934   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:06:04.904833   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:06:04.904844   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:06:04.919720   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:06:04.919731   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:06:04.931804   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:06:04.931818   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:06:04.946880   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:06:04.946893   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:04.958412   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:06:04.958422   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:06:04.970499   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:04.970510   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:05.008700   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:06:05.008714   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:06:05.022001   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:06:05.022014   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:06:05.038454   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:06:05.038465   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:06:05.050104   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:05.050114   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:05.074852   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:06:05.074859   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:06:05.088610   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:05.088623   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:05.127149   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:06:05.127160   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:06:05.165494   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:06:05.165504   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:06:05.179162   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:06:05.179173   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:06:05.194296   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:06:05.194308   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:06:05.207018   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:06:05.207029   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:06:07.727257   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:12.729401   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:12.729563   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:12.740807   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:06:12.740884   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:12.751544   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:06:12.751618   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:12.762188   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:06:12.762257   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:12.772532   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:06:12.772596   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:12.783350   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:06:12.783422   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:12.793806   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:06:12.793883   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:12.803801   21725 logs.go:276] 0 containers: []
	W0318 05:06:12.803814   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:12.803880   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:12.814278   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:06:12.814291   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:06:12.814297   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:06:12.825650   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:06:12.825661   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:06:12.836621   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:06:12.836632   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:06:12.849046   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:12.849058   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:12.853645   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:12.853654   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:12.890516   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:06:12.890527   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:06:12.904864   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:06:12.904874   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:06:12.916361   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:06:12.916373   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:06:12.927904   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:06:12.927915   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:06:12.941549   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:06:12.941561   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:06:12.953093   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:06:12.953105   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:06:12.970363   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:06:12.970374   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:06:12.983973   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:06:12.983988   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:12.996121   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:12.996130   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:13.019347   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:13.019356   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:13.057204   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:06:13.057213   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:06:13.093507   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:06:13.093520   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:06:13.107085   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:06:13.107097   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:06:13.123472   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:06:13.123484   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:06:15.636976   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:20.639402   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:20.639624   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:20.672198   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:06:20.672290   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:20.686793   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:06:20.686859   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:20.698397   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:06:20.698470   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:20.708911   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:06:20.708991   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:20.722275   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:06:20.722342   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:20.734619   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:06:20.734692   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:20.745351   21725 logs.go:276] 0 containers: []
	W0318 05:06:20.745364   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:20.745427   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:20.759508   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:06:20.759525   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:20.759530   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:20.764111   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:06:20.764119   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:06:20.780322   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:20.780335   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:20.818753   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:06:20.818763   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:06:20.829952   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:06:20.829964   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:06:20.846574   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:06:20.846585   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:06:20.859789   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:06:20.859799   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:06:20.871356   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:20.871369   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:20.895771   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:06:20.895778   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:06:20.909450   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:06:20.909460   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:06:20.947227   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:06:20.947239   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:06:20.961177   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:06:20.961188   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:06:20.975789   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:06:20.975800   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:06:20.987162   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:06:20.987175   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:06:20.998699   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:20.998710   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:21.035295   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:06:21.035308   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:06:21.049763   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:06:21.049776   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:06:21.061999   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:06:21.062010   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:06:21.073320   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:06:21.073332   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
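
After enumeration comes the fan-out shown above: the last 400 log lines from every container found, plus the kubelet and docker units from journald. A minimal sketch of that gathering phase, again assuming local execution rather than minikube's ssh_runner, with the two etcd container IDs taken from this log purely as examples:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one collection command and prints whatever it produced,
// mirroring the "Gathering logs for ..." lines above.
func gather(name string, args ...string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		fmt.Println(name, "failed:", err)
	}
	fmt.Print(string(out))
}

func main() {
	for _, id := range []string{"a5be5dc1602f", "fb7044aa6fe8"} { // e.g. the two etcd containers above
		gather("etcd ["+id+"]", "docker", "logs", "--tail", "400", id)
	}
	gather("kubelet", "sudo", "journalctl", "-u", "kubelet", "-n", "400")
	gather("Docker", "sudo", "journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400")
}
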
	I0318 05:06:23.588142   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:28.590446   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:28.590747   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:28.612231   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:06:28.612333   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:28.628239   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:06:28.628318   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:28.640554   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:06:28.640629   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:28.651634   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:06:28.651701   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:28.663158   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:06:28.663234   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:28.678875   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:06:28.678947   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:28.689378   21725 logs.go:276] 0 containers: []
	W0318 05:06:28.689388   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:28.689450   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:28.699880   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:06:28.699895   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:06:28.699900   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:06:28.720250   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:06:28.720260   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:06:28.731930   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:06:28.731940   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:06:28.745902   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:28.745917   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:28.784753   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:06:28.784765   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:06:28.822424   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:06:28.822440   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:06:28.835840   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:06:28.835854   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:06:28.850701   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:06:28.850716   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:06:28.862076   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:06:28.862088   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:28.874921   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:28.874936   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:28.879749   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:06:28.879756   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:06:28.891157   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:06:28.891168   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:06:28.907113   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:06:28.907122   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:06:28.918637   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:28.918648   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:28.955484   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:06:28.955494   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:06:28.966512   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:06:28.966524   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:06:28.978024   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:06:28.978034   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:06:28.995017   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:06:28.995028   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:06:29.008906   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:29.008918   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:31.534639   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:36.536883   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:36.537150   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:36.561704   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:06:36.561807   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:36.577914   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:06:36.578005   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:36.590844   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:06:36.590919   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:36.601926   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:06:36.602000   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:36.612974   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:06:36.613047   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:36.628093   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:06:36.628166   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:36.638082   21725 logs.go:276] 0 containers: []
	W0318 05:06:36.638094   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:36.638155   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:36.652968   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:06:36.652983   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:06:36.652989   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:06:36.665011   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:06:36.665023   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:06:36.704953   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:36.704975   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:36.715604   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:06:36.715617   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:06:36.730589   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:06:36.730600   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:06:36.745270   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:06:36.745287   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:06:36.757037   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:06:36.757050   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:06:36.769551   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:06:36.769566   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:06:36.781366   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:36.781378   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:36.821223   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:06:36.821236   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:06:36.836102   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:06:36.836115   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:06:36.847539   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:06:36.847551   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:06:36.864513   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:06:36.864523   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:06:36.876534   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:06:36.876545   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:06:36.893353   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:36.893362   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:36.916821   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:06:36.916829   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:36.930322   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:36.930335   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:36.970032   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:06:36.970042   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:06:36.983331   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:06:36.983342   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:06:39.496243   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:44.498890   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:44.499243   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:44.534768   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:06:44.534901   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:44.552690   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:06:44.552788   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:44.566559   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:06:44.566639   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:44.578974   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:06:44.579049   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:44.590062   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:06:44.590142   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:44.602384   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:06:44.602450   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:44.612605   21725 logs.go:276] 0 containers: []
	W0318 05:06:44.612618   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:44.612684   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:44.623575   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:06:44.623592   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:06:44.623598   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:44.635589   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:06:44.635599   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:06:44.675093   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:06:44.675105   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:06:44.688924   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:06:44.688934   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:06:44.706113   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:06:44.706124   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:06:44.719261   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:44.719276   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:44.724088   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:06:44.724094   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:06:44.737759   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:06:44.737773   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:06:44.751929   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:06:44.751939   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:06:44.767836   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:06:44.767847   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:06:44.782181   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:44.782192   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:44.805768   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:06:44.805778   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:06:44.818507   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:06:44.818518   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:06:44.835198   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:06:44.835210   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:06:44.854409   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:06:44.854421   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:06:44.872121   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:44.872132   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:44.913455   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:44.913465   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:44.948029   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:06:44.948040   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:06:44.961982   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:06:44.961992   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:06:47.474959   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:52.477284   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:52.477734   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:52.515401   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:06:52.515540   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:52.537499   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:06:52.537623   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:52.552290   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:06:52.552373   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:52.564345   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:06:52.564413   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:52.575756   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:06:52.575826   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:52.586980   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:06:52.587049   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:52.602190   21725 logs.go:276] 0 containers: []
	W0318 05:06:52.602204   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:52.602272   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:52.617329   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:06:52.617344   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:06:52.617352   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:06:52.629712   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:52.629722   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:52.668867   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:06:52.668878   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:06:52.688362   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:06:52.688375   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:06:52.702516   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:06:52.702527   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:06:52.718805   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:06:52.718816   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:06:52.730632   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:06:52.730644   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:06:52.742527   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:52.742538   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:52.782880   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:06:52.782893   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:06:52.821256   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:06:52.821270   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:06:52.838584   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:06:52.838596   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:06:52.861721   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:52.861731   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:52.884855   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:06:52.884862   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:52.896708   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:52.896719   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:52.900921   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:06:52.900928   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:06:52.914843   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:06:52.914854   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:06:52.926920   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:06:52.926930   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:06:52.938802   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:06:52.938814   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:06:52.953444   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:06:52.953458   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:06:55.466647   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:00.469121   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:00.469364   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:07:00.489226   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:07:00.489321   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:07:00.502525   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:07:00.502604   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:07:00.514370   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:07:00.514453   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:07:00.524839   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:07:00.524915   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:07:00.535186   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:07:00.535255   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:07:00.545880   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:07:00.545957   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:07:00.558103   21725 logs.go:276] 0 containers: []
	W0318 05:07:00.558115   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:07:00.558180   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:07:00.569218   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:07:00.569235   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:07:00.569241   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:07:00.580859   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:07:00.580872   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:07:00.605912   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:07:00.605930   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:07:00.610537   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:07:00.610543   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:07:00.626082   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:07:00.626098   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:07:00.643037   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:07:00.643049   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:07:00.655263   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:07:00.655275   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:07:00.669851   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:07:00.669864   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:07:00.681636   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:07:00.681647   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:07:00.695663   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:07:00.695674   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:07:00.707055   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:07:00.707067   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:07:00.749164   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:07:00.749178   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:07:00.791124   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:07:00.791136   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:07:00.805575   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:07:00.805586   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:07:00.821640   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:07:00.821654   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:07:00.837447   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:07:00.837462   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:07:00.854353   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:07:00.854365   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:07:00.890602   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:07:00.890614   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:07:00.905603   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:07:00.905614   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
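Each gathering pass then pulls the last 400 lines from every discovered container, plus the journald units and a node description. Condensed into a sketch (container ID and binary path copied from the log):

	# per-container logs, journald units, and node state, as logs.go collects them
	docker logs --tail 400 8001b6be7e31                      # e.g. coredns
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig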
	I0318 05:07:03.419125   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:08.421793   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:08.422249   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:07:08.459368   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:07:08.459503   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:07:08.482205   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:07:08.482304   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:07:08.495885   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:07:08.495953   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:07:08.507572   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:07:08.507648   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:07:08.518258   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:07:08.518322   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:07:08.529348   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:07:08.529416   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:07:08.539422   21725 logs.go:276] 0 containers: []
	W0318 05:07:08.539434   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:07:08.539489   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:07:08.550539   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:07:08.550554   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:07:08.550560   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:07:08.592797   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:07:08.592807   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:07:08.607132   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:07:08.607141   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:07:08.619056   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:07:08.619068   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:07:08.630699   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:07:08.630709   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:07:08.647911   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:07:08.647924   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:07:08.663418   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:07:08.663429   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:07:08.675669   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:07:08.675683   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:07:08.716521   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:07:08.716535   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:07:08.730626   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:07:08.730636   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:07:08.769001   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:07:08.769012   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:07:08.782653   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:07:08.782665   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:07:08.794183   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:07:08.794195   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:07:08.810856   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:07:08.810867   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:07:08.822458   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:07:08.822470   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:07:08.845019   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:07:08.845027   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:07:08.849628   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:07:08.849634   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:07:08.861131   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:07:08.861144   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:07:08.873387   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:07:08.873398   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:07:11.394417   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:16.396967   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:16.397207   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:07:16.413999   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:07:16.414089   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:07:16.426655   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:07:16.426728   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:07:16.437467   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:07:16.437534   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:07:16.447773   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:07:16.447849   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:07:16.461396   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:07:16.461461   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:07:16.471592   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:07:16.471665   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:07:16.481263   21725 logs.go:276] 0 containers: []
	W0318 05:07:16.481274   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:07:16.481326   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:07:16.497691   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:07:16.497704   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:07:16.497709   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:07:16.534823   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:07:16.534837   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:07:16.546389   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:07:16.546403   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:07:16.557372   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:07:16.557386   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:07:16.573858   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:07:16.573868   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:07:16.587577   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:07:16.587591   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:07:16.599418   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:07:16.599428   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:07:16.612049   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:07:16.612061   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:07:16.653005   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:07:16.653014   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:07:16.688150   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:07:16.688161   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:07:16.702223   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:07:16.702234   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:07:16.719175   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:07:16.719187   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:07:16.724231   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:07:16.724239   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:07:16.736359   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:07:16.736371   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:07:16.753420   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:07:16.753432   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:07:16.764461   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:07:16.764474   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:07:16.786743   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:07:16.786751   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:07:16.800683   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:07:16.800697   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:07:16.815181   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:07:16.815194   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:07:19.329350   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:24.331588   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:24.331698   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:07:24.343914   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:07:24.343994   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:07:24.364134   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:07:24.364213   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:07:24.377348   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:07:24.377438   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:07:24.389211   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:07:24.389288   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:07:24.400906   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:07:24.400987   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:07:24.413102   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:07:24.413186   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:07:24.424134   21725 logs.go:276] 0 containers: []
	W0318 05:07:24.424147   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:07:24.424211   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:07:24.436173   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:07:24.436192   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:07:24.436198   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:07:24.451227   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:07:24.451240   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:07:24.466320   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:07:24.466332   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:07:24.513129   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:07:24.513150   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:07:24.526204   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:07:24.526216   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:07:24.542796   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:07:24.542809   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:07:24.566712   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:07:24.566729   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:07:24.571772   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:07:24.571782   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:07:24.588832   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:07:24.588844   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:07:24.607190   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:07:24.607204   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:07:24.622492   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:07:24.622508   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:07:24.634913   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:07:24.634924   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:07:24.650431   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:07:24.650445   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:07:24.691758   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:07:24.691776   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:07:24.729006   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:07:24.729020   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:07:24.743698   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:07:24.743712   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:07:24.768977   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:07:24.768991   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:07:24.786394   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:07:24.786407   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:07:24.804334   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:07:24.804350   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:07:27.317847   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:32.320054   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:32.320238   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:07:32.335915   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:07:32.336007   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:07:32.348603   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:07:32.348670   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:07:32.365035   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:07:32.365104   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:07:32.375890   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:07:32.375965   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:07:32.386554   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:07:32.386622   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:07:32.397481   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:07:32.397556   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:07:32.408204   21725 logs.go:276] 0 containers: []
	W0318 05:07:32.408216   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:07:32.408276   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:07:32.419058   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:07:32.419074   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:07:32.419080   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:07:32.435880   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:07:32.435891   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:07:32.447904   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:07:32.447916   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:07:32.464424   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:07:32.464437   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:07:32.478505   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:07:32.478515   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:07:32.517518   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:07:32.517534   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:07:32.531534   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:07:32.531544   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:07:32.545879   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:07:32.545889   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:07:32.557215   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:07:32.557227   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:07:32.579247   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:07:32.579258   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:07:32.584409   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:07:32.584417   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:07:32.596128   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:07:32.596138   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:07:32.608050   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:07:32.608059   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:07:32.626122   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:07:32.626134   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:07:32.639528   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:07:32.639540   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:07:32.675430   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:07:32.675442   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:07:32.688633   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:07:32.688644   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:07:32.701508   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:07:32.701519   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:07:32.740384   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:07:32.740398   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:07:35.266293   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:40.268369   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:40.268414   21725 kubeadm.go:591] duration metric: took 4m6.421998042s to restartPrimaryControlPlane
	W0318 05:07:40.268451   21725 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 05:07:40.268470   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0318 05:07:41.317558   21725 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.049109833s)
	I0318 05:07:41.317641   21725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
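Having given up on restarting the control plane after 4m6s, minikube tears the cluster down before re-initializing. The teardown performed above, condensed (all paths and arguments copied from the log):

	# force a kubeadm reset against the cri-dockerd socket, then confirm
	# the kubelet service is no longer active
	sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	  kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
	sudo systemctl is-active --quiet service kubelet && echo "kubelet still active"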
	I0318 05:07:41.322444   21725 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 05:07:41.325332   21725 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 05:07:41.328075   21725 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 05:07:41.328084   21725 kubeadm.go:156] found existing configuration files:
	
	I0318 05:07:41.328103   21725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/admin.conf
	I0318 05:07:41.331189   21725 kubeadm.go:162] "https://control-plane.minikube.internal:54379" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 05:07:41.331216   21725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 05:07:41.333989   21725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/kubelet.conf
	I0318 05:07:41.336406   21725 kubeadm.go:162] "https://control-plane.minikube.internal:54379" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 05:07:41.336427   21725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 05:07:41.339382   21725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/controller-manager.conf
	I0318 05:07:41.342056   21725 kubeadm.go:162] "https://control-plane.minikube.internal:54379" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 05:07:41.342079   21725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 05:07:41.344510   21725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/scheduler.conf
	I0318 05:07:41.347467   21725 kubeadm.go:162] "https://control-plane.minikube.internal:54379" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 05:07:41.347490   21725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
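The four grep-then-rm exchanges above implement stale kubeconfig cleanup: a conf file is kept only if it already references the expected control-plane endpoint. Here none of the files exist after the reset, so each grep exits with status 2 and the rm is a no-op. The same logic as a loop (endpoint and paths copied from the log):

	# drop any kubeconfig that does not reference the expected endpoint
	endpoint="https://control-plane.minikube.internal:54379"
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "$endpoint" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done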
	I0318 05:07:41.350616   21725 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 05:07:41.366858   21725 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0318 05:07:41.366896   21725 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 05:07:41.414505   21725 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 05:07:41.414662   21725 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 05:07:41.414712   21725 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0318 05:07:41.466127   21725 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 05:07:41.470305   21725 out.go:204]   - Generating certificates and keys ...
	I0318 05:07:41.470337   21725 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 05:07:41.470368   21725 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 05:07:41.470400   21725 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 05:07:41.470426   21725 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 05:07:41.470461   21725 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 05:07:41.470492   21725 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 05:07:41.470521   21725 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 05:07:41.470548   21725 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 05:07:41.470585   21725 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 05:07:41.470624   21725 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 05:07:41.470647   21725 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 05:07:41.470673   21725 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 05:07:41.560038   21725 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 05:07:41.794443   21725 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 05:07:42.026427   21725 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 05:07:42.180444   21725 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 05:07:42.210309   21725 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 05:07:42.212157   21725 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 05:07:42.212180   21725 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 05:07:42.301278   21725 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 05:07:42.304045   21725 out.go:204]   - Booting up control plane ...
	I0318 05:07:42.304087   21725 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 05:07:42.304127   21725 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 05:07:42.304164   21725 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 05:07:42.304204   21725 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 05:07:42.304281   21725 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 05:07:47.308751   21725 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.005306 seconds
	I0318 05:07:47.308856   21725 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 05:07:47.315306   21725 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 05:07:47.823437   21725 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 05:07:47.823527   21725 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-349000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 05:07:48.327707   21725 kubeadm.go:309] [bootstrap-token] Using token: d44j0d.tbclig13jiu1wa7k
	I0318 05:07:48.333908   21725 out.go:204]   - Configuring RBAC rules ...
	I0318 05:07:48.333978   21725 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 05:07:48.334027   21725 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 05:07:48.338310   21725 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 05:07:48.339177   21725 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 05:07:48.339884   21725 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 05:07:48.340795   21725 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 05:07:48.343957   21725 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 05:07:48.522032   21725 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 05:07:48.731633   21725 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 05:07:48.732225   21725 kubeadm.go:309] 
	I0318 05:07:48.732265   21725 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 05:07:48.732270   21725 kubeadm.go:309] 
	I0318 05:07:48.732317   21725 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 05:07:48.732320   21725 kubeadm.go:309] 
	I0318 05:07:48.732333   21725 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 05:07:48.732364   21725 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 05:07:48.732392   21725 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 05:07:48.732397   21725 kubeadm.go:309] 
	I0318 05:07:48.732433   21725 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 05:07:48.732436   21725 kubeadm.go:309] 
	I0318 05:07:48.732463   21725 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 05:07:48.732469   21725 kubeadm.go:309] 
	I0318 05:07:48.732497   21725 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 05:07:48.732534   21725 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 05:07:48.732585   21725 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 05:07:48.732589   21725 kubeadm.go:309] 
	I0318 05:07:48.732633   21725 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 05:07:48.732680   21725 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 05:07:48.732684   21725 kubeadm.go:309] 
	I0318 05:07:48.732734   21725 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token d44j0d.tbclig13jiu1wa7k \
	I0318 05:07:48.732792   21725 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:2c4297b91ace817e0fb1c32526c2ad664eb333850689868816794ba1e9dfc07f \
	I0318 05:07:48.732804   21725 kubeadm.go:309] 	--control-plane 
	I0318 05:07:48.732806   21725 kubeadm.go:309] 
	I0318 05:07:48.732851   21725 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 05:07:48.732859   21725 kubeadm.go:309] 
	I0318 05:07:48.732900   21725 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token d44j0d.tbclig13jiu1wa7k \
	I0318 05:07:48.732957   21725 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:2c4297b91ace817e0fb1c32526c2ad664eb333850689868816794ba1e9dfc07f 
	I0318 05:07:48.733008   21725 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
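Unlike the restart attempts earlier in the log, this kubeadm init completes in roughly seven seconds. The invocation, abbreviated (the full --ignore-preflight-errors list is in the Run line above; only a representative subset is shown):

	# re-initialize the control plane from minikube's generated config,
	# tolerating pre-existing manifests and host resource limits
	sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem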
	I0318 05:07:48.733014   21725 cni.go:84] Creating CNI manager for ""
	I0318 05:07:48.733022   21725 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:07:48.736976   21725 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 05:07:48.743922   21725 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 05:07:48.748027   21725 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
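The "scp memory" step streams a generated file over SSH rather than copying one from disk; here it writes a 457-byte bridge conflist. A representative bridge configuration of that kind (illustrative content only, not the exact bytes minikube generates):

	sudo mkdir -p /etc/cni/net.d
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	     "ipMasq": true, "hairpinMode": true,
	     "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}
	EOF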
	I0318 05:07:48.753410   21725 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 05:07:48.753469   21725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 05:07:48.753486   21725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-349000 minikube.k8s.io/updated_at=2024_03_18T05_07_48_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=running-upgrade-349000 minikube.k8s.io/primary=true
	I0318 05:07:48.807022   21725 kubeadm.go:1107] duration metric: took 53.60575ms to wait for elevateKubeSystemPrivileges
	I0318 05:07:48.807041   21725 ops.go:34] apiserver oom_adj: -16
	W0318 05:07:48.807054   21725 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 05:07:48.807057   21725 kubeadm.go:393] duration metric: took 4m14.974716291s to StartCluster
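The two kubectl invocations a few lines up grant kube-system's default service account cluster-admin rights and label the node as the primary control plane. Condensed (binary and kubeconfig paths copied from the log; the label set is abbreviated to the primary marker):

	KCTL=/var/lib/minikube/binaries/v1.24.1/kubectl
	sudo $KCTL --kubeconfig=/var/lib/minikube/kubeconfig \
	  create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default
	sudo $KCTL --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite \
	  nodes running-upgrade-349000 minikube.k8s.io/primary=true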
	I0318 05:07:48.807067   21725 settings.go:142] acquiring lock: {Name:mkc727ca725e75d24ce65050e373ec9e186fcd50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:07:48.807151   21725 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:07:48.807588   21725 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/kubeconfig: {Name:mke65151970e01af41afaa654a36ecdb221d1a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:07:48.807774   21725 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:07:48.811852   21725 out.go:177] * Verifying Kubernetes components...
	I0318 05:07:48.807862   21725 config.go:182] Loaded profile config "running-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 05:07:48.807831   21725 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 05:07:48.819850   21725 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-349000"
	I0318 05:07:48.819856   21725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:07:48.819864   21725 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-349000"
	W0318 05:07:48.819871   21725 addons.go:243] addon storage-provisioner should already be in state true
	I0318 05:07:48.819888   21725 host.go:66] Checking if "running-upgrade-349000" exists ...
	I0318 05:07:48.819888   21725 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-349000"
	I0318 05:07:48.819917   21725 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-349000"
	I0318 05:07:48.820139   21725 retry.go:31] will retry after 1.334452348s: connect: dial unix /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/running-upgrade-349000/monitor: connect: connection refused
	I0318 05:07:48.820995   21725 kapi.go:59] client config for running-upgrade-349000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/client.key", CAFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10578ea80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 05:07:48.821111   21725 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-349000"
	W0318 05:07:48.821116   21725 addons.go:243] addon default-storageclass should already be in state true
	I0318 05:07:48.821123   21725 host.go:66] Checking if "running-upgrade-349000" exists ...
	I0318 05:07:48.821788   21725 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 05:07:48.821793   21725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 05:07:48.821798   21725 sshutil.go:53] new ssh client: &{IP:localhost Port:54315 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/running-upgrade-349000/id_rsa Username:docker}
	I0318 05:07:48.908828   21725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 05:07:48.914015   21725 api_server.go:52] waiting for apiserver process to appear ...
	I0318 05:07:48.914060   21725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:07:48.917765   21725 api_server.go:72] duration metric: took 109.983958ms to wait for apiserver process to appear ...
	I0318 05:07:48.917774   21725 api_server.go:88] waiting for apiserver healthz status ...
	I0318 05:07:48.917780   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:48.931465   21725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 05:07:50.162144   21725 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:07:50.166171   21725 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 05:07:50.166183   21725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 05:07:50.166200   21725 sshutil.go:53] new ssh client: &{IP:localhost Port:54315 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/running-upgrade-349000/id_rsa Username:docker}
	I0318 05:07:50.208522   21725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
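Addon manifests follow the same scp-memory pattern: the YAML is streamed into /etc/kubernetes/addons/ over SSH and then applied with the guest's pinned kubectl. The apply step as a standalone sketch (paths copied from the log):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.24.1/kubectl apply \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml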
	I0318 05:07:53.919691   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:53.919734   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:58.920023   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:58.920051   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:03.920241   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:03.920270   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:08.920578   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:08.920606   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:13.921005   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:13.921037   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:18.921625   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:18.921645   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0318 05:08:19.242942   21725 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0318 05:08:19.247039   21725 out.go:177] * Enabled addons: storage-provisioner
	I0318 05:08:19.254814   21725 addons.go:505] duration metric: took 30.447965334s for enable addons: enabled=[storage-provisioner]
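The default-storageclass failure above is the same unreachable apiserver seen throughout this log: the addon callback, running on the host side, tries to list StorageClasses and the TCP dial to the guest times out. The failing request, reproduced by hand (URL taken from the error message; -k and the timeout are assumptions):

	curl -sk --max-time 5 \
	  https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses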
	I0318 05:08:23.922802   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:23.922847   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:28.924116   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:28.924168   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:33.925777   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:33.925813   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:38.927705   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:38.927745   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:43.927911   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:43.927936   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:48.929970   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:48.930136   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:08:48.940991   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:08:48.941071   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:08:48.951503   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:08:48.951575   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:08:48.961707   21725 logs.go:276] 2 containers: [27b1f1e110ed 7ab83b2ece4e]
	I0318 05:08:48.961768   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:08:48.972078   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:08:48.972148   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:08:48.982517   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:08:48.982592   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:08:48.992856   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:08:48.992916   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:08:49.003232   21725 logs.go:276] 0 containers: []
	W0318 05:08:49.003244   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:08:49.003302   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:08:49.014032   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:08:49.014046   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:08:49.014053   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:08:49.028570   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:08:49.028582   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:08:49.042512   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:08:49.042526   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:08:49.058927   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:08:49.058941   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:08:49.091884   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:08:49.091895   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:08:49.096360   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:08:49.096370   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:08:49.110429   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:08:49.110439   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:08:49.124760   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:08:49.124771   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:08:49.142013   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:08:49.142022   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:08:49.166925   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:08:49.166932   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:08:49.178460   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:08:49.178472   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:08:49.219229   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:08:49.219239   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:08:49.230619   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:08:49.230633   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
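	Every probe in this stretch fails the same way: the runner GETs https://10.0.2.15:8443/healthz, the 5-second client timeout expires ("Client.Timeout exceeded while awaiting headers"), and a full log-gathering pass runs before the next attempt. A minimal bash sketch of that retry loop, as a hypothetical reproduction of what api_server.go is doing (the URL and timeout come from the log lines above; the sleep and loop shape are illustrative only):

	    # Hypothetical reproduction of the healthz retry loop logged above.
	    # minikube implements this in Go (api_server.go); this is only a sketch.
	    HEALTHZ_URL="https://10.0.2.15:8443/healthz"
	    until curl -sk --max-time 5 "$HEALTHZ_URL" >/dev/null; do
	        echo "stopped: $HEALTHZ_URL: timeout; gathering component logs ..."
	        sleep 3    # the timestamps above show a ~2-3s pause between passes
	    done
	    echo "apiserver healthy"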
	I0318 05:08:51.743804   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:56.746029   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:56.746192   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:08:56.764260   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:08:56.764355   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:08:56.778091   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:08:56.778168   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:08:56.789515   21725 logs.go:276] 2 containers: [27b1f1e110ed 7ab83b2ece4e]
	I0318 05:08:56.789584   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:08:56.800587   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:08:56.800657   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:08:56.815602   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:08:56.815676   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:08:56.826393   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:08:56.826460   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:08:56.836345   21725 logs.go:276] 0 containers: []
	W0318 05:08:56.836356   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:08:56.836416   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:08:56.849474   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:08:56.849490   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:08:56.849495   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:08:56.863003   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:08:56.863013   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:08:56.874901   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:08:56.874912   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:08:56.886351   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:08:56.886360   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:08:56.901060   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:08:56.901074   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:08:56.912862   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:08:56.912873   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:08:56.937311   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:08:56.937322   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:08:56.971691   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:08:56.971702   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:08:56.986665   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:08:56.986676   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:08:57.005194   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:08:57.005206   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:08:57.017165   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:08:57.017177   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:08:57.029461   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:08:57.029472   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:08:57.065265   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:08:57.065278   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:08:59.569640   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:04.571829   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:04.572057   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:04.602961   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:09:04.603056   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:04.621272   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:09:04.621346   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:04.636105   21725 logs.go:276] 2 containers: [27b1f1e110ed 7ab83b2ece4e]
	I0318 05:09:04.636172   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:04.646554   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:09:04.646616   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:04.656942   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:09:04.657007   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:04.667548   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:09:04.667608   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:04.677680   21725 logs.go:276] 0 containers: []
	W0318 05:09:04.677690   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:04.677743   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:04.688137   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:09:04.688153   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:04.688158   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:04.762837   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:09:04.762849   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:09:04.777199   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:09:04.777211   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:09:04.788454   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:09:04.788465   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:09:04.799624   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:09:04.799639   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:09:04.811738   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:09:04.811750   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:04.826260   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:04.826274   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:04.849098   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:04.849107   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:09:04.882593   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:04.882603   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:04.887606   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:09:04.887615   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:09:04.902393   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:09:04.902404   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:09:04.917611   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:09:04.917621   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:09:04.929556   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:09:04.929567   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:09:07.453676   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:12.455747   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:12.455908   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:12.468591   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:09:12.468668   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:12.479107   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:09:12.479172   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:12.489667   21725 logs.go:276] 2 containers: [27b1f1e110ed 7ab83b2ece4e]
	I0318 05:09:12.489745   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:12.499750   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:09:12.499819   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:12.510039   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:09:12.510114   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:12.520951   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:09:12.521018   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:12.531528   21725 logs.go:276] 0 containers: []
	W0318 05:09:12.531542   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:12.531603   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:12.542830   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:09:12.542847   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:12.542854   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:12.579418   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:09:12.579432   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:09:12.593741   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:09:12.593752   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:09:12.610900   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:12.610914   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:12.635306   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:12.635318   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:12.641477   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:09:12.641490   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:09:12.656594   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:09:12.656608   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:09:12.668599   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:09:12.668613   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:09:12.681107   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:09:12.681120   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:09:12.693904   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:09:12.693914   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:09:12.711907   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:09:12.711921   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:09:12.724025   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:09:12.724037   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:12.735712   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:12.735725   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
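	The gathering pass itself is identical on every iteration: for each control-plane component the runner lists matching containers by name filter, then tails the last 400 lines of each. A sketch assembled from the exact docker commands shown in the ssh_runner lines above (the component names mirror the k8s_<name> filters; wrapping them in a loop is my own packaging, not minikube's code):

	    # Per-component gather pass, using the same commands the runner logs above.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet storage-provisioner; do
	        ids=$(docker ps -a --filter=name=k8s_${c} --format={{.ID}})
	        [ -z "$ids" ] && echo "No container was found matching \"$c\"" && continue
	        for id in $ids; do
	            docker logs --tail 400 "$id"
	        done
	    done

	Each pass also tails journalctl for the kubelet, docker, and cri-docker units and runs kubectl describe nodes against /var/lib/minikube/kubeconfig, exactly as the log lines show.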
	I0318 05:09:15.272755   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:20.274897   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:20.275060   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:20.289822   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:09:20.289898   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:20.301831   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:09:20.301901   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:20.319130   21725 logs.go:276] 2 containers: [27b1f1e110ed 7ab83b2ece4e]
	I0318 05:09:20.319195   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:20.334288   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:09:20.334357   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:20.344230   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:09:20.344293   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:20.357504   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:09:20.357571   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:20.367296   21725 logs.go:276] 0 containers: []
	W0318 05:09:20.367309   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:20.367367   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:20.377613   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:09:20.377628   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:20.377634   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:09:20.410353   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:20.410362   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:20.415255   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:09:20.415263   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:09:20.429586   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:09:20.429598   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:09:20.445501   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:09:20.445512   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:09:20.457071   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:20.457083   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:20.480201   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:20.480208   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:20.516738   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:09:20.516751   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:09:20.531569   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:09:20.531583   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:09:20.543048   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:09:20.543061   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:09:20.555001   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:09:20.555013   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:09:20.570458   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:09:20.570469   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:09:20.595542   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:09:20.595557   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:23.110634   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:28.112725   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:28.112878   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:28.127955   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:09:28.128049   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:28.140109   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:09:28.140189   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:28.150836   21725 logs.go:276] 2 containers: [27b1f1e110ed 7ab83b2ece4e]
	I0318 05:09:28.150904   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:28.163217   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:09:28.163281   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:28.173885   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:09:28.173962   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:28.188576   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:09:28.188634   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:28.198601   21725 logs.go:276] 0 containers: []
	W0318 05:09:28.198613   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:28.198666   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:28.209182   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:09:28.209198   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:28.209204   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:28.245042   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:09:28.245055   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:09:28.259403   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:09:28.259416   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:09:28.277429   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:09:28.277440   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:28.289258   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:28.289269   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:09:28.323052   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:28.323061   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:28.327566   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:09:28.327574   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:09:28.339205   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:09:28.339217   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:09:28.355929   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:09:28.355939   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:09:28.367629   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:09:28.367643   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:09:28.378606   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:28.378616   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:28.403520   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:09:28.403529   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:09:28.417415   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:09:28.417425   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:09:30.930963   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:35.933115   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:35.933286   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:35.945907   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:09:35.945980   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:35.961313   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:09:35.961381   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:35.971983   21725 logs.go:276] 2 containers: [27b1f1e110ed 7ab83b2ece4e]
	I0318 05:09:35.972054   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:35.982542   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:09:35.982618   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:35.992632   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:09:35.992707   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:36.002763   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:09:36.002833   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:36.016878   21725 logs.go:276] 0 containers: []
	W0318 05:09:36.016890   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:36.016952   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:36.026877   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:09:36.026894   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:09:36.026899   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:09:36.044915   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:09:36.044924   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:36.057084   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:36.057099   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:09:36.091872   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:36.091882   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:36.096485   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:36.096492   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:36.134351   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:09:36.134365   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:09:36.146025   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:09:36.146036   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:09:36.158545   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:09:36.158554   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:09:36.170656   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:36.170667   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:36.194855   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:09:36.194863   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:09:36.209872   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:09:36.209881   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:09:36.223514   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:09:36.223524   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:09:36.234922   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:09:36.234931   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:09:38.751288   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:43.753452   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:43.753632   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:43.771060   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:09:43.771163   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:43.789681   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:09:43.789773   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:43.800487   21725 logs.go:276] 2 containers: [27b1f1e110ed 7ab83b2ece4e]
	I0318 05:09:43.800572   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:43.810804   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:09:43.810872   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:43.820911   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:09:43.820992   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:43.831215   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:09:43.831296   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:43.841415   21725 logs.go:276] 0 containers: []
	W0318 05:09:43.841430   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:43.841504   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:43.851702   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:09:43.851717   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:09:43.851724   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:43.862808   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:43.862818   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:43.867299   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:43.867307   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:43.902270   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:09:43.902283   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:09:43.916912   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:09:43.916923   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:09:43.928568   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:43.928580   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:43.952381   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:09:43.952391   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:09:43.969450   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:09:43.969461   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:09:43.981174   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:43.981185   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:09:44.015421   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:09:44.015432   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:09:44.034814   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:09:44.034828   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:09:44.046685   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:09:44.046697   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:09:44.058438   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:09:44.058448   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:09:46.575286   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:51.577552   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:51.577778   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:51.592871   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:09:51.592954   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:51.604009   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:09:51.604074   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:51.614813   21725 logs.go:276] 2 containers: [27b1f1e110ed 7ab83b2ece4e]
	I0318 05:09:51.614878   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:51.625644   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:09:51.625720   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:51.636691   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:09:51.636762   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:51.647455   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:09:51.647527   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:51.657668   21725 logs.go:276] 0 containers: []
	W0318 05:09:51.657679   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:51.657744   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:51.668759   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:09:51.668774   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:09:51.668779   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:09:51.680401   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:09:51.680410   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:09:51.694699   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:09:51.694710   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:09:51.717496   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:09:51.717506   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:09:51.729073   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:51.729087   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:51.752625   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:51.752635   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:51.756820   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:51.756828   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:51.792620   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:09:51.792630   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:09:51.803942   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:09:51.803953   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:51.816406   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:09:51.816418   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:09:51.828072   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:51.828083   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:09:51.862237   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:09:51.862247   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:09:51.882525   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:09:51.882536   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:09:54.398692   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:59.399398   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:59.399577   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:59.422057   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:09:59.422152   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:59.437304   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:09:59.437386   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:59.450449   21725 logs.go:276] 2 containers: [27b1f1e110ed 7ab83b2ece4e]
	I0318 05:09:59.450520   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:59.471961   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:09:59.472028   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:59.484668   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:09:59.484740   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:59.509226   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:09:59.509280   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:59.532407   21725 logs.go:276] 0 containers: []
	W0318 05:09:59.532419   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:59.532478   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:59.546717   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:09:59.546738   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:59.546744   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:59.573709   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:09:59.573730   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:59.595520   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:59.595533   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:09:59.637953   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:59.637967   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:59.642834   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:09:59.642842   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:09:59.654339   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:09:59.654351   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:09:59.672114   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:09:59.672127   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:09:59.685711   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:09:59.685722   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:09:59.696831   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:59.696842   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:59.760248   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:09:59.760261   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:09:59.790601   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:09:59.790616   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:09:59.809662   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:09:59.809675   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:09:59.821845   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:09:59.821860   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:10:02.338642   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:07.340807   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:07.341163   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:07.379193   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:10:07.379324   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:07.398312   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:10:07.400873   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:07.420826   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:10:07.420905   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:07.434257   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:10:07.434329   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:07.445173   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:10:07.445240   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:07.456332   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:10:07.456397   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:07.467719   21725 logs.go:276] 0 containers: []
	W0318 05:10:07.467730   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:07.467793   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:07.481916   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:10:07.481935   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:10:07.481941   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:10:07.496369   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:10:07.496380   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:10:07.507590   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:10:07.507603   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:10:07.520675   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:10:07.520686   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:10:07.532422   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:10:07.532438   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:10:07.549545   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:07.549556   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:07.554658   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:10:07.554666   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:10:07.566263   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:10:07.566274   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:10:07.578597   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:10:07.578608   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:07.591858   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:07.591869   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:07.628161   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:10:07.628175   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:10:07.639810   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:10:07.639822   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:10:07.653899   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:10:07.653910   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:10:07.668583   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:07.668594   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:07.692098   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:07.692106   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
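	From 05:10:07 onward the coredns filter returns four containers instead of two; the two new IDs (52877a5aee47, 653552bfe323) appear alongside the original pair, which is consistent with coredns containers being recreated while the apiserver stays unreachable. A hypothetical one-liner to distinguish the newer instances by creation time (the CreatedAt/Status placeholders are standard docker ps format fields, not taken from this log):

	    # Hypothetical diagnostic: list coredns containers with creation time
	    # and status to see which of the four IDs above are newer restarts.
	    docker ps -a --filter=name=k8s_coredns \
	        --format '{{.ID}} {{.CreatedAt}} {{.Status}}'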
	I0318 05:10:10.226758   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:15.228948   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:15.229219   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:15.253928   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:10:15.254053   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:15.270773   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:10:15.270862   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:15.284584   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:10:15.284650   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:15.295847   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:10:15.295912   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:15.310703   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:10:15.310772   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:15.323527   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:10:15.323597   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:15.333800   21725 logs.go:276] 0 containers: []
	W0318 05:10:15.333809   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:15.333874   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:15.344844   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:10:15.344864   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:15.344871   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:15.350735   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:10:15.350742   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:10:15.362303   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:10:15.362315   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:10:15.373695   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:15.373708   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:10:15.408436   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:10:15.408450   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:10:15.428402   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:10:15.428414   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:10:15.443003   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:15.443013   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:15.467942   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:10:15.467957   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:10:15.482213   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:10:15.482225   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:10:15.493613   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:10:15.493626   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:10:15.506352   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:10:15.506364   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:10:15.518586   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:10:15.518597   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:10:15.530328   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:10:15.530338   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:10:15.548213   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:10:15.548224   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:15.560159   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:15.560171   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:18.102023   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:23.104075   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:23.104233   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:23.117292   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:10:23.117373   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:23.128421   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:10:23.128486   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:23.138529   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:10:23.138602   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:23.148944   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:10:23.149016   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:23.161850   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:10:23.161917   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:23.172318   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:10:23.172391   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:23.187178   21725 logs.go:276] 0 containers: []
	W0318 05:10:23.187194   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:23.187259   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:23.197533   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:10:23.197549   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:10:23.197555   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:10:23.212997   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:23.213009   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:23.238125   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:10:23.238132   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:10:23.252388   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:10:23.252399   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:10:23.264506   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:10:23.264517   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:10:23.275937   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:10:23.275948   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:23.287926   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:23.287937   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:10:23.321365   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:10:23.321375   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:10:23.335111   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:10:23.335123   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:10:23.346475   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:10:23.346491   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:10:23.358485   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:10:23.358496   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:10:23.370001   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:23.370013   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:23.374552   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:23.374557   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:23.409889   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:10:23.409902   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:10:23.421736   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:10:23.421748   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:10:25.941125   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:30.943298   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:30.943427   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:30.955559   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:10:30.955634   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:30.965822   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:10:30.965894   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:30.980233   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:10:30.980303   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:30.990802   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:10:30.990876   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:31.001684   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:10:31.001752   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:31.012576   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:10:31.012645   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:31.022648   21725 logs.go:276] 0 containers: []
	W0318 05:10:31.022659   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:31.022720   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:31.042890   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:10:31.042908   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:10:31.042915   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:10:31.054676   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:10:31.054687   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:10:31.066097   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:10:31.066106   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:31.079656   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:10:31.079668   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:10:31.097108   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:10:31.097118   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:10:31.108913   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:31.108926   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:10:31.142623   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:31.142632   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:31.147075   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:10:31.147084   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:10:31.158593   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:10:31.158607   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:10:31.179725   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:10:31.179738   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:10:31.191304   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:10:31.191314   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:10:31.205915   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:31.205926   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:31.241103   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:10:31.241115   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:10:31.255677   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:10:31.255689   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:10:31.271062   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:31.271073   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:33.797423   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:38.798715   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:38.798925   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:38.826853   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:10:38.826933   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:38.838404   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:10:38.838483   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:38.849198   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:10:38.849266   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:38.863716   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:10:38.863784   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:38.874097   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:10:38.874163   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:38.885212   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:10:38.885274   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:38.895652   21725 logs.go:276] 0 containers: []
	W0318 05:10:38.895663   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:38.895716   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:38.906970   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:10:38.906988   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:10:38.906993   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:10:38.919139   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:10:38.919149   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:10:38.930599   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:10:38.930613   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:10:38.941943   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:38.941976   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:10:38.976707   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:38.976719   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:38.981732   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:38.981741   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:39.017377   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:10:39.017391   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:10:39.031766   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:10:39.031778   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:10:39.049522   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:10:39.049535   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:10:39.061145   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:10:39.061156   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:10:39.076097   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:10:39.076106   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:10:39.088058   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:39.088073   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:39.113136   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:10:39.113145   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:10:39.127049   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:10:39.127063   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:10:39.138095   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:10:39.138105   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:41.651138   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:46.653191   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:46.653344   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:46.664177   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:10:46.664263   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:46.674805   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:10:46.674880   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:46.685623   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:10:46.685700   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:46.696208   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:10:46.696273   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:46.707014   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:10:46.707100   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:46.718079   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:10:46.718144   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:46.727773   21725 logs.go:276] 0 containers: []
	W0318 05:10:46.727783   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:46.727839   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:46.737943   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:10:46.737958   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:46.737963   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:46.742614   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:10:46.742624   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:10:46.754025   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:10:46.754036   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:10:46.765285   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:10:46.765297   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:10:46.777190   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:10:46.777201   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:10:46.789080   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:10:46.789094   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:10:46.801054   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:46.801066   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:46.836656   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:10:46.836888   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:10:46.852822   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:10:46.852839   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:10:46.864732   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:10:46.864746   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:10:46.890496   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:10:46.890511   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:46.902174   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:46.902185   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:10:46.937571   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:10:46.937585   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:10:46.952086   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:10:46.952099   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:10:46.966728   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:46.966747   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:49.492148   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:54.492449   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:54.492703   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:54.519252   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:10:54.519376   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:54.536522   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:10:54.536597   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:54.550165   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:10:54.550246   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:54.561451   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:10:54.561521   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:54.571824   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:10:54.571893   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:54.582460   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:10:54.582538   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:54.592021   21725 logs.go:276] 0 containers: []
	W0318 05:10:54.592038   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:54.592090   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:54.604302   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:10:54.604320   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:10:54.604325   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:10:54.618791   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:10:54.618804   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:10:54.634428   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:54.634440   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:10:54.671704   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:10:54.671715   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:10:54.683445   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:10:54.683458   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:10:54.697715   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:54.697727   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:54.721947   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:54.721957   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:54.762121   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:10:54.762137   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:10:54.774601   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:10:54.774613   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:10:54.790792   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:10:54.790803   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:10:54.803422   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:10:54.803433   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:54.815213   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:10:54.815224   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:10:54.829571   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:10:54.829584   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:10:54.841558   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:10:54.841573   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:10:54.866729   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:54.866744   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:57.372929   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:11:02.373811   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:11:02.374044   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:11:02.397623   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:11:02.398816   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:11:02.414570   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:11:02.414649   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:11:02.427583   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:11:02.427658   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:11:02.438820   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:11:02.438877   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:11:02.448902   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:11:02.448961   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:11:02.459389   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:11:02.459461   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:11:02.469606   21725 logs.go:276] 0 containers: []
	W0318 05:11:02.469618   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:11:02.469674   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:11:02.481204   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:11:02.481226   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:11:02.481232   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:11:02.486245   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:11:02.486252   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:11:02.503931   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:11:02.503943   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:11:02.531726   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:11:02.531739   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:11:02.564739   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:11:02.564747   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:11:02.576820   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:11:02.576831   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:11:02.588982   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:11:02.588997   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:11:02.602566   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:11:02.602578   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:11:02.614574   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:11:02.614587   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:11:02.636273   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:11:02.636288   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:11:02.647509   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:11:02.647524   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:11:02.659053   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:11:02.659069   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:11:02.673208   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:11:02.673219   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:11:02.690983   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:11:02.690996   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:11:02.715849   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:11:02.715861   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:11:05.252486   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:11:10.254167   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:11:10.254468   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:11:10.280659   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:11:10.280782   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:11:10.297789   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:11:10.297870   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:11:10.311278   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:11:10.311363   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:11:10.322617   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:11:10.322684   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:11:10.332790   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:11:10.332858   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:11:10.345125   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:11:10.345206   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:11:10.355757   21725 logs.go:276] 0 containers: []
	W0318 05:11:10.355771   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:11:10.355836   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:11:10.366612   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:11:10.366633   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:11:10.366639   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:11:10.381180   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:11:10.381193   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:11:10.395078   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:11:10.395092   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:11:10.407203   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:11:10.407218   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:11:10.425279   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:11:10.425291   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:11:10.448026   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:11:10.448033   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:11:10.460514   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:11:10.460528   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:11:10.472646   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:11:10.472661   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:11:10.487375   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:11:10.487388   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:11:10.498824   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:11:10.498836   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:11:10.510222   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:11:10.510236   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:11:10.514692   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:11:10.514702   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:11:10.552336   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:11:10.552349   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:11:10.566842   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:11:10.566856   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:11:10.600207   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:11:10.600220   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:11:13.118344   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:11:18.120136   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:11:18.120314   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:11:18.135702   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:11:18.135781   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:11:18.153773   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:11:18.153844   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:11:18.164674   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:11:18.164745   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:11:18.174829   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:11:18.174899   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:11:18.185829   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:11:18.185893   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:11:18.196251   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:11:18.196314   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:11:18.206735   21725 logs.go:276] 0 containers: []
	W0318 05:11:18.206748   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:11:18.206806   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:11:18.216815   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:11:18.216829   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:11:18.216834   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:11:18.221267   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:11:18.221275   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:11:18.232503   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:11:18.232514   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:11:18.248396   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:11:18.248407   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:11:18.266532   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:11:18.266543   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:11:18.290481   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:11:18.290490   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:11:18.303422   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:11:18.303435   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:11:18.337759   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:11:18.337767   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:11:18.374456   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:11:18.374468   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:11:18.385812   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:11:18.385822   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:11:18.398235   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:11:18.398247   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:11:18.412863   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:11:18.412876   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:11:18.427240   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:11:18.427257   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:11:18.438801   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:11:18.438811   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:11:18.450555   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:11:18.450568   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:11:20.971295   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:11:25.973116   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:11:25.973329   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:11:25.985296   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:11:25.985376   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:11:25.996332   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:11:25.996396   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:11:26.006895   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:11:26.006971   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:11:26.017628   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:11:26.017701   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:11:26.027591   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:11:26.027656   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:11:26.038145   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:11:26.038216   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:11:26.048322   21725 logs.go:276] 0 containers: []
	W0318 05:11:26.048333   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:11:26.048387   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:11:26.060202   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:11:26.060221   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:11:26.060227   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:11:26.072194   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:11:26.072206   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:11:26.086980   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:11:26.086990   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:11:26.099082   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:11:26.099093   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:11:26.133869   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:11:26.133881   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:11:26.148416   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:11:26.148427   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:11:26.159606   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:11:26.159616   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:11:26.171141   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:11:26.171150   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:11:26.176061   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:11:26.176070   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:11:26.190647   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:11:26.190659   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:11:26.202753   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:11:26.202764   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:11:26.227064   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:11:26.227074   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:11:26.238601   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:11:26.238615   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:11:26.273302   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:11:26.273309   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:11:26.284873   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:11:26.284883   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:11:28.807102   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:11:33.809072   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:11:33.809198   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:11:33.820338   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:11:33.820418   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:11:33.831434   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:11:33.831506   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:11:33.842182   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:11:33.842254   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:11:33.857320   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:11:33.857392   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:11:33.870951   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:11:33.871022   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:11:33.881414   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:11:33.881496   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:11:33.896511   21725 logs.go:276] 0 containers: []
	W0318 05:11:33.896526   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:11:33.896589   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:11:33.908092   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:11:33.908109   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:11:33.908115   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:11:33.941445   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:11:33.941454   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:11:33.956743   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:11:33.956753   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:11:33.974127   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:11:33.974138   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:11:33.986018   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:11:33.986028   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:11:33.997729   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:11:33.997741   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:11:34.010157   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:11:34.010169   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:11:34.021589   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:11:34.021601   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:11:34.025932   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:11:34.025940   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:11:34.060934   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:11:34.060946   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:11:34.075189   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:11:34.075200   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:11:34.087192   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:11:34.087204   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:11:34.098996   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:11:34.099008   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:11:34.112811   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:11:34.112821   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:11:34.124157   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:11:34.124167   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:11:36.649051   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:11:41.651110   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:11:41.651362   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:11:41.682507   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:11:41.682625   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:11:41.698140   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:11:41.698236   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:11:41.710891   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:11:41.710973   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:11:41.722174   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:11:41.722243   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:11:41.733061   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:11:41.733135   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:11:41.746700   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:11:41.746775   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:11:41.757255   21725 logs.go:276] 0 containers: []
	W0318 05:11:41.757269   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:11:41.757327   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:11:41.767756   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:11:41.767774   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:11:41.767780   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:11:41.779584   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:11:41.779597   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:11:41.797337   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:11:41.797347   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:11:41.808704   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:11:41.808715   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:11:41.843887   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:11:41.843899   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:11:41.859081   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:11:41.859094   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:11:41.871844   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:11:41.871856   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:11:41.883906   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:11:41.883918   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:11:41.919458   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:11:41.919469   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:11:41.932231   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:11:41.932244   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:11:41.947411   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:11:41.947426   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:11:41.959341   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:11:41.959352   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:11:41.982013   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:11:41.982022   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:11:41.986472   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:11:41.986482   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:11:42.000103   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:11:42.000116   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:11:44.513871   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:11:49.516111   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:11:49.520025   21725 out.go:177] 
	W0318 05:11:49.524039   21725 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0318 05:11:49.524057   21725 out.go:239] * 
	W0318 05:11:49.525626   21725 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:11:49.536004   21725 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-349000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-03-18 05:11:49.631278 -0700 PDT m=+1428.654520543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-349000 -n running-upgrade-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-349000 -n running-upgrade-349000: exit status 2 (15.6450045s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-349000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-970000 sudo                                | cilium-970000             | jenkins | v1.32.0 | 18 Mar 24 05:01 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-970000 sudo                                | cilium-970000             | jenkins | v1.32.0 | 18 Mar 24 05:01 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-970000 sudo cat                            | cilium-970000             | jenkins | v1.32.0 | 18 Mar 24 05:01 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-970000 sudo cat                            | cilium-970000             | jenkins | v1.32.0 | 18 Mar 24 05:01 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-970000 sudo                                | cilium-970000             | jenkins | v1.32.0 | 18 Mar 24 05:01 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-970000 sudo                                | cilium-970000             | jenkins | v1.32.0 | 18 Mar 24 05:01 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-970000 sudo                                | cilium-970000             | jenkins | v1.32.0 | 18 Mar 24 05:01 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-970000 sudo cat                            | cilium-970000             | jenkins | v1.32.0 | 18 Mar 24 05:01 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-970000 sudo cat                            | cilium-970000             | jenkins | v1.32.0 | 18 Mar 24 05:01 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-970000 sudo                                | cilium-970000             | jenkins | v1.32.0 | 18 Mar 24 05:01 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-970000 sudo                                | cilium-970000             | jenkins | v1.32.0 | 18 Mar 24 05:01 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-970000 sudo                                | cilium-970000             | jenkins | v1.32.0 | 18 Mar 24 05:01 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-970000 sudo find                           | cilium-970000             | jenkins | v1.32.0 | 18 Mar 24 05:01 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-970000 sudo crio                           | cilium-970000             | jenkins | v1.32.0 | 18 Mar 24 05:01 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-970000                                     | cilium-970000             | jenkins | v1.32.0 | 18 Mar 24 05:01 PDT | 18 Mar 24 05:01 PDT |
	| start   | -p kubernetes-upgrade-304000                         | kubernetes-upgrade-304000 | jenkins | v1.32.0 | 18 Mar 24 05:01 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-969000                             | offline-docker-969000     | jenkins | v1.32.0 | 18 Mar 24 05:01 PDT | 18 Mar 24 05:01 PDT |
	| stop    | -p kubernetes-upgrade-304000                         | kubernetes-upgrade-304000 | jenkins | v1.32.0 | 18 Mar 24 05:01 PDT | 18 Mar 24 05:01 PDT |
	| start   | -p kubernetes-upgrade-304000                         | kubernetes-upgrade-304000 | jenkins | v1.32.0 | 18 Mar 24 05:01 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                    |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-211000                            | minikube                  | jenkins | v1.26.0 | 18 Mar 24 05:01 PDT | 18 Mar 24 05:02 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-304000                         | kubernetes-upgrade-304000 | jenkins | v1.32.0 | 18 Mar 24 05:01 PDT | 18 Mar 24 05:01 PDT |
	| start   | -p running-upgrade-349000                            | minikube                  | jenkins | v1.26.0 | 18 Mar 24 05:01 PDT | 18 Mar 24 05:02 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-211000 stop                          | minikube                  | jenkins | v1.26.0 | 18 Mar 24 05:02 PDT | 18 Mar 24 05:02 PDT |
	| start   | -p stopped-upgrade-211000                            | stopped-upgrade-211000    | jenkins | v1.32.0 | 18 Mar 24 05:02 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-349000                            | running-upgrade-349000    | jenkins | v1.32.0 | 18 Mar 24 05:02 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 05:02:57
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 05:02:57.409453   21725 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:02:57.409575   21725 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:02:57.409578   21725 out.go:304] Setting ErrFile to fd 2...
	I0318 05:02:57.409581   21725 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:02:57.409731   21725 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:02:57.410793   21725 out.go:298] Setting JSON to false
	I0318 05:02:57.427580   21725 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10950,"bootTime":1710752427,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:02:57.427649   21725 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:02:57.432707   21725 out.go:177] * [running-upgrade-349000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:02:57.439632   21725 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:02:57.443731   21725 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:02:57.439730   21725 notify.go:220] Checking for updates...
	I0318 05:02:57.451606   21725 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:02:57.454697   21725 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:02:57.457682   21725 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:02:57.460664   21725 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:02:57.463945   21725 config.go:182] Loaded profile config "running-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 05:02:57.467664   21725 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 05:02:57.470655   21725 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:02:57.474650   21725 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 05:02:57.481631   21725 start.go:297] selected driver: qemu2
	I0318 05:02:57.481636   21725 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54379 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 05:02:57.481679   21725 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:02:57.483829   21725 cni.go:84] Creating CNI manager for ""
	I0318 05:02:57.483845   21725 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:02:57.483862   21725 start.go:340] cluster config:
	{Name:running-upgrade-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54379 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 05:02:57.483906   21725 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:02:57.489587   21725 out.go:177] * Starting "running-upgrade-349000" primary control-plane node in "running-upgrade-349000" cluster
	I0318 05:02:57.493675   21725 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 05:02:57.493686   21725 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0318 05:02:57.493693   21725 cache.go:56] Caching tarball of preloaded images
	I0318 05:02:57.493737   21725 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:02:57.493742   21725 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0318 05:02:57.493785   21725 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/config.json ...
	I0318 05:02:57.494105   21725 start.go:360] acquireMachinesLock for running-upgrade-349000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:03:07.324516   21713 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/config.json ...
	I0318 05:03:07.324749   21713 machine.go:94] provisionDockerMachine start ...
	I0318 05:03:07.324793   21713 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:07.324931   21713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105279bf0] 0x10527c450 <nil>  [] 0s} localhost 54278 <nil> <nil>}
	I0318 05:03:07.324935   21713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 05:03:07.387993   21713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 05:03:07.388009   21713 buildroot.go:166] provisioning hostname "stopped-upgrade-211000"
	I0318 05:03:07.388072   21713 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:07.388178   21713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105279bf0] 0x10527c450 <nil>  [] 0s} localhost 54278 <nil> <nil>}
	I0318 05:03:07.388184   21713 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-211000 && echo "stopped-upgrade-211000" | sudo tee /etc/hostname
	I0318 05:03:07.452831   21713 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-211000
	
	I0318 05:03:07.452878   21713 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:07.452988   21713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105279bf0] 0x10527c450 <nil>  [] 0s} localhost 54278 <nil> <nil>}
	I0318 05:03:07.452998   21713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-211000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-211000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-211000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 05:03:07.514664   21713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 05:03:07.514676   21713 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18427-19517/.minikube CaCertPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18427-19517/.minikube}
	I0318 05:03:07.514683   21713 buildroot.go:174] setting up certificates
	I0318 05:03:07.514693   21713 provision.go:84] configureAuth start
	I0318 05:03:07.514699   21713 provision.go:143] copyHostCerts
	I0318 05:03:07.514768   21713 exec_runner.go:144] found /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.pem, removing ...
	I0318 05:03:07.514774   21713 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.pem
	I0318 05:03:07.514899   21713 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.pem (1078 bytes)
	I0318 05:03:07.515071   21713 exec_runner.go:144] found /Users/jenkins/minikube-integration/18427-19517/.minikube/cert.pem, removing ...
	I0318 05:03:07.515075   21713 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18427-19517/.minikube/cert.pem
	I0318 05:03:07.515120   21713 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18427-19517/.minikube/cert.pem (1123 bytes)
	I0318 05:03:07.515210   21713 exec_runner.go:144] found /Users/jenkins/minikube-integration/18427-19517/.minikube/key.pem, removing ...
	I0318 05:03:07.515213   21713 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18427-19517/.minikube/key.pem
	I0318 05:03:07.515303   21713 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18427-19517/.minikube/key.pem (1679 bytes)
	I0318 05:03:07.515397   21713 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-211000 san=[127.0.0.1 localhost minikube stopped-upgrade-211000]
	I0318 05:03:08.492169   21725 start.go:364] duration metric: took 10.998404708s to acquireMachinesLock for "running-upgrade-349000"
	I0318 05:03:08.492191   21725 start.go:96] Skipping create...Using existing machine configuration
	I0318 05:03:08.492196   21725 fix.go:54] fixHost starting: 
	I0318 05:03:08.493017   21725 fix.go:112] recreateIfNeeded on running-upgrade-349000: state=Running err=<nil>
	W0318 05:03:08.493025   21725 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 05:03:08.501231   21725 out.go:177] * Updating the running qemu2 "running-upgrade-349000" VM ...
	I0318 05:03:07.815777   21713 provision.go:177] copyRemoteCerts
	I0318 05:03:07.815829   21713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 05:03:07.815839   21713 sshutil.go:53] new ssh client: &{IP:localhost Port:54278 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	I0318 05:03:07.850297   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 05:03:07.857836   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 05:03:07.865429   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 05:03:07.872252   21713 provision.go:87] duration metric: took 357.561833ms to configureAuth
	I0318 05:03:07.872266   21713 buildroot.go:189] setting minikube options for container-runtime
	I0318 05:03:07.872384   21713 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 05:03:07.872424   21713 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:07.872516   21713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105279bf0] 0x10527c450 <nil>  [] 0s} localhost 54278 <nil> <nil>}
	I0318 05:03:07.872521   21713 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 05:03:07.936278   21713 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 05:03:07.936291   21713 buildroot.go:70] root file system type: tmpfs
	I0318 05:03:07.936353   21713 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 05:03:07.936411   21713 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:07.936529   21713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105279bf0] 0x10527c450 <nil>  [] 0s} localhost 54278 <nil> <nil>}
	I0318 05:03:07.936563   21713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 05:03:08.004689   21713 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 05:03:08.004762   21713 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:08.004882   21713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105279bf0] 0x10527c450 <nil>  [] 0s} localhost 54278 <nil> <nil>}
	I0318 05:03:08.004894   21713 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 05:03:08.381067   21713 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 05:03:08.381083   21713 machine.go:97] duration metric: took 1.056361334s to provisionDockerMachine
	I0318 05:03:08.381091   21713 start.go:293] postStartSetup for "stopped-upgrade-211000" (driver="qemu2")
	I0318 05:03:08.381097   21713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 05:03:08.381162   21713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 05:03:08.381173   21713 sshutil.go:53] new ssh client: &{IP:localhost Port:54278 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	I0318 05:03:08.418075   21713 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 05:03:08.419351   21713 info.go:137] Remote host: Buildroot 2021.02.12
	I0318 05:03:08.419360   21713 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18427-19517/.minikube/addons for local assets ...
	I0318 05:03:08.419421   21713 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18427-19517/.minikube/files for local assets ...
	I0318 05:03:08.419517   21713 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18427-19517/.minikube/files/etc/ssl/certs/199262.pem -> 199262.pem in /etc/ssl/certs
	I0318 05:03:08.419604   21713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 05:03:08.422738   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/files/etc/ssl/certs/199262.pem --> /etc/ssl/certs/199262.pem (1708 bytes)
	I0318 05:03:08.429784   21713 start.go:296] duration metric: took 48.6895ms for postStartSetup
	I0318 05:03:08.429804   21713 fix.go:56] duration metric: took 20.528398875s for fixHost
	I0318 05:03:08.429834   21713 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:08.429939   21713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105279bf0] 0x10527c450 <nil>  [] 0s} localhost 54278 <nil> <nil>}
	I0318 05:03:08.429945   21713 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 05:03:08.492090   21713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710763388.451731087
	
	I0318 05:03:08.492099   21713 fix.go:216] guest clock: 1710763388.451731087
	I0318 05:03:08.492103   21713 fix.go:229] Guest: 2024-03-18 05:03:08.451731087 -0700 PDT Remote: 2024-03-18 05:03:08.429806 -0700 PDT m=+20.658361418 (delta=21.925087ms)
	I0318 05:03:08.492114   21713 fix.go:200] guest clock delta is within tolerance: 21.925087ms
	I0318 05:03:08.492116   21713 start.go:83] releasing machines lock for "stopped-upgrade-211000", held for 20.59072425s
	I0318 05:03:08.492188   21713 ssh_runner.go:195] Run: cat /version.json
	I0318 05:03:08.492196   21713 sshutil.go:53] new ssh client: &{IP:localhost Port:54278 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	I0318 05:03:08.492286   21713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 05:03:08.492345   21713 sshutil.go:53] new ssh client: &{IP:localhost Port:54278 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	W0318 05:03:08.492866   21713 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:54474->127.0.0.1:54278: read: connection reset by peer
	I0318 05:03:08.492884   21713 retry.go:31] will retry after 319.920115ms: ssh: handshake failed: read tcp 127.0.0.1:54474->127.0.0.1:54278: read: connection reset by peer
	W0318 05:03:08.523931   21713 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0318 05:03:08.523991   21713 ssh_runner.go:195] Run: systemctl --version
	I0318 05:03:08.525699   21713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 05:03:08.527208   21713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 05:03:08.527234   21713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0318 05:03:08.530471   21713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0318 05:03:08.535182   21713 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 05:03:08.535191   21713 start.go:494] detecting cgroup driver to use...
	I0318 05:03:08.535309   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 05:03:08.542405   21713 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0318 05:03:08.546353   21713 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 05:03:08.549866   21713 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 05:03:08.549899   21713 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 05:03:08.553074   21713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 05:03:08.555938   21713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 05:03:08.559316   21713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 05:03:08.562726   21713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 05:03:08.566002   21713 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 05:03:08.569158   21713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 05:03:08.571792   21713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 05:03:08.574879   21713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:08.657180   21713 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 05:03:08.663323   21713 start.go:494] detecting cgroup driver to use...
	I0318 05:03:08.663401   21713 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 05:03:08.670071   21713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 05:03:08.675631   21713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 05:03:08.688196   21713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 05:03:08.693233   21713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 05:03:08.697960   21713 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 05:03:08.750830   21713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 05:03:08.756475   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 05:03:08.761918   21713 ssh_runner.go:195] Run: which cri-dockerd
	I0318 05:03:08.763497   21713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 05:03:08.766740   21713 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 05:03:08.771898   21713 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 05:03:08.843137   21713 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 05:03:08.922042   21713 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 05:03:08.922107   21713 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 05:03:08.928086   21713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:09.004971   21713 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 05:03:10.146774   21713 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.141820333s)
	I0318 05:03:10.146846   21713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 05:03:10.152251   21713 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0318 05:03:10.161039   21713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 05:03:10.165672   21713 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 05:03:10.251042   21713 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 05:03:10.332659   21713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:10.417880   21713 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 05:03:10.424479   21713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 05:03:10.429512   21713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:10.516642   21713 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 05:03:10.554349   21713 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 05:03:10.554429   21713 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 05:03:10.557716   21713 start.go:562] Will wait 60s for crictl version
	I0318 05:03:10.557765   21713 ssh_runner.go:195] Run: which crictl
	I0318 05:03:10.558934   21713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 05:03:10.574151   21713 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0318 05:03:10.574220   21713 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 05:03:10.590344   21713 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 05:03:08.505092   21725 machine.go:94] provisionDockerMachine start ...
	I0318 05:03:08.505158   21725 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:08.505297   21725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10449dbf0] 0x1044a0450 <nil>  [] 0s} localhost 54315 <nil> <nil>}
	I0318 05:03:08.505307   21725 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 05:03:08.578868   21725 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-349000
	
	I0318 05:03:08.578884   21725 buildroot.go:166] provisioning hostname "running-upgrade-349000"
	I0318 05:03:08.578923   21725 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:08.579035   21725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10449dbf0] 0x1044a0450 <nil>  [] 0s} localhost 54315 <nil> <nil>}
	I0318 05:03:08.579041   21725 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-349000 && echo "running-upgrade-349000" | sudo tee /etc/hostname
	I0318 05:03:08.658302   21725 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-349000
	
	I0318 05:03:08.658342   21725 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:08.658456   21725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10449dbf0] 0x1044a0450 <nil>  [] 0s} localhost 54315 <nil> <nil>}
	I0318 05:03:08.658467   21725 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-349000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-349000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-349000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 05:03:08.732206   21725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 05:03:08.732221   21725 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18427-19517/.minikube CaCertPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18427-19517/.minikube}
	I0318 05:03:08.732230   21725 buildroot.go:174] setting up certificates
	I0318 05:03:08.732240   21725 provision.go:84] configureAuth start
	I0318 05:03:08.732245   21725 provision.go:143] copyHostCerts
	I0318 05:03:08.732311   21725 exec_runner.go:144] found /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.pem, removing ...
	I0318 05:03:08.732320   21725 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.pem
	I0318 05:03:08.732425   21725 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.pem (1078 bytes)
	I0318 05:03:08.732592   21725 exec_runner.go:144] found /Users/jenkins/minikube-integration/18427-19517/.minikube/cert.pem, removing ...
	I0318 05:03:08.732596   21725 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18427-19517/.minikube/cert.pem
	I0318 05:03:08.732637   21725 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18427-19517/.minikube/cert.pem (1123 bytes)
	I0318 05:03:08.732735   21725 exec_runner.go:144] found /Users/jenkins/minikube-integration/18427-19517/.minikube/key.pem, removing ...
	I0318 05:03:08.732738   21725 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18427-19517/.minikube/key.pem
	I0318 05:03:08.732771   21725 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18427-19517/.minikube/key.pem (1679 bytes)
	I0318 05:03:08.732855   21725 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-349000 san=[127.0.0.1 localhost minikube running-upgrade-349000]
	I0318 05:03:08.883831   21725 provision.go:177] copyRemoteCerts
	I0318 05:03:08.883873   21725 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 05:03:08.883883   21725 sshutil.go:53] new ssh client: &{IP:localhost Port:54315 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/running-upgrade-349000/id_rsa Username:docker}
	I0318 05:03:08.925627   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 05:03:08.933181   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 05:03:08.940345   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 05:03:08.947070   21725 provision.go:87] duration metric: took 214.827625ms to configureAuth
	I0318 05:03:08.947084   21725 buildroot.go:189] setting minikube options for container-runtime
	I0318 05:03:08.947186   21725 config.go:182] Loaded profile config "running-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 05:03:08.947231   21725 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:08.947319   21725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10449dbf0] 0x1044a0450 <nil>  [] 0s} localhost 54315 <nil> <nil>}
	I0318 05:03:08.947324   21725 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 05:03:09.021434   21725 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 05:03:09.021444   21725 buildroot.go:70] root file system type: tmpfs
	I0318 05:03:09.021497   21725 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 05:03:09.021547   21725 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:09.021657   21725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10449dbf0] 0x1044a0450 <nil>  [] 0s} localhost 54315 <nil> <nil>}
	I0318 05:03:09.021690   21725 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 05:03:09.098747   21725 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 05:03:09.098813   21725 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:09.098933   21725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10449dbf0] 0x1044a0450 <nil>  [] 0s} localhost 54315 <nil> <nil>}
	I0318 05:03:09.098941   21725 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 05:03:09.174095   21725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 05:03:09.174107   21725 machine.go:97] duration metric: took 669.030583ms to provisionDockerMachine
	I0318 05:03:09.174112   21725 start.go:293] postStartSetup for "running-upgrade-349000" (driver="qemu2")
	I0318 05:03:09.174119   21725 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 05:03:09.174176   21725 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 05:03:09.174184   21725 sshutil.go:53] new ssh client: &{IP:localhost Port:54315 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/running-upgrade-349000/id_rsa Username:docker}
	I0318 05:03:09.212565   21725 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 05:03:09.213995   21725 info.go:137] Remote host: Buildroot 2021.02.12
	I0318 05:03:09.214003   21725 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18427-19517/.minikube/addons for local assets ...
	I0318 05:03:09.214066   21725 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18427-19517/.minikube/files for local assets ...
	I0318 05:03:09.214154   21725 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18427-19517/.minikube/files/etc/ssl/certs/199262.pem -> 199262.pem in /etc/ssl/certs
	I0318 05:03:09.214242   21725 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 05:03:09.216814   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/files/etc/ssl/certs/199262.pem --> /etc/ssl/certs/199262.pem (1708 bytes)
	I0318 05:03:09.223780   21725 start.go:296] duration metric: took 49.663167ms for postStartSetup
	I0318 05:03:09.223795   21725 fix.go:56] duration metric: took 731.624292ms for fixHost
	I0318 05:03:09.223830   21725 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:09.223937   21725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10449dbf0] 0x1044a0450 <nil>  [] 0s} localhost 54315 <nil> <nil>}
	I0318 05:03:09.223942   21725 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 05:03:09.297178   21725 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710763389.588434346
	
	I0318 05:03:09.297187   21725 fix.go:216] guest clock: 1710763389.588434346
	I0318 05:03:09.297191   21725 fix.go:229] Guest: 2024-03-18 05:03:09.588434346 -0700 PDT Remote: 2024-03-18 05:03:09.223797 -0700 PDT m=+11.838400876 (delta=364.637346ms)
	I0318 05:03:09.297203   21725 fix.go:200] guest clock delta is within tolerance: 364.637346ms
	I0318 05:03:09.297209   21725 start.go:83] releasing machines lock for "running-upgrade-349000", held for 805.05375ms
	I0318 05:03:09.297284   21725 ssh_runner.go:195] Run: cat /version.json
	I0318 05:03:09.297294   21725 sshutil.go:53] new ssh client: &{IP:localhost Port:54315 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/running-upgrade-349000/id_rsa Username:docker}
	I0318 05:03:09.297284   21725 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 05:03:09.297329   21725 sshutil.go:53] new ssh client: &{IP:localhost Port:54315 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/running-upgrade-349000/id_rsa Username:docker}
	W0318 05:03:09.440634   21725 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0318 05:03:09.440715   21725 ssh_runner.go:195] Run: systemctl --version
	I0318 05:03:09.442442   21725 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 05:03:09.444086   21725 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 05:03:09.444111   21725 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0318 05:03:09.447486   21725 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0318 05:03:09.452081   21725 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 05:03:09.452089   21725 start.go:494] detecting cgroup driver to use...
	I0318 05:03:09.452165   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 05:03:09.457587   21725 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0318 05:03:09.460375   21725 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 05:03:09.463829   21725 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 05:03:09.463857   21725 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 05:03:09.467403   21725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 05:03:09.470786   21725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 05:03:09.473584   21725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 05:03:09.476542   21725 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 05:03:09.480151   21725 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 05:03:09.483145   21725 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 05:03:09.486138   21725 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 05:03:09.488939   21725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:09.592585   21725 ssh_runner.go:195] Run: sudo systemctl restart containerd
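	Note: the run of sed commands above rewrites /etc/containerd/config.toml in place: SystemdCgroup = false selects the cgroupfs driver, sandbox_image pins pause:3.7, and the io.containerd.runc.v2 substitutions move every runtime onto the v2 shim. A quick way to confirm the edits took on the guest (a sketch; same file and service as the log):
	    # Inspect the settings the sed edits should have produced.
	    grep -E 'SystemdCgroup|sandbox_image|io\.containerd\.runc' /etc/containerd/config.toml
	    # The file is only picked up after the restart the log performs next.
	    sudo systemctl daemon-reload && sudo systemctl restart containerd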
	I0318 05:03:09.600847   21725 start.go:494] detecting cgroup driver to use...
	I0318 05:03:09.600922   21725 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 05:03:09.606873   21725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 05:03:09.611318   21725 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 05:03:09.617777   21725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 05:03:09.622736   21725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 05:03:09.627494   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 05:03:09.632872   21725 ssh_runner.go:195] Run: which cri-dockerd
	I0318 05:03:09.634132   21725 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 05:03:09.636717   21725 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 05:03:09.641646   21725 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 05:03:09.749315   21725 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 05:03:09.850915   21725 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 05:03:09.850974   21725 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 05:03:09.856344   21725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:09.957489   21725 ssh_runner.go:195] Run: sudo systemctl restart docker
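	Note: for the Docker runtime the same cgroupfs choice is applied by writing a small /etc/docker/daemon.json (130 bytes here) and restarting the daemon. The driver actually in effect can be read back with the same invocation this log uses later:
	    # Prints "cgroupfs" once dockerd has restarted with the new daemon.json.
	    docker info --format '{{.CgroupDriver}}'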
	I0318 05:03:10.609643   21713 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0318 05:03:10.609709   21713 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0318 05:03:10.610905   21713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 05:03:10.615011   21713 kubeadm.go:877] updating cluster {Name:stopped-upgrade-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54310 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0318 05:03:10.615056   21713 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 05:03:10.615096   21713 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 05:03:10.626664   21713 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 05:03:10.626678   21713 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 05:03:10.626726   21713 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 05:03:10.630341   21713 ssh_runner.go:195] Run: which lz4
	I0318 05:03:10.631524   21713 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 05:03:10.632850   21713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 05:03:10.632862   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0318 05:03:11.374146   21713 docker.go:649] duration metric: took 742.674125ms to copy over tarball
	I0318 05:03:11.374220   21713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 05:03:12.524947   21713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.150750458s)
	I0318 05:03:12.524966   21713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 05:03:12.541091   21713 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 05:03:12.544083   21713 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0318 05:03:12.549054   21713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:12.635173   21713 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 05:03:14.292376   21713 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.657236917s)
	I0318 05:03:14.292503   21713 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 05:03:14.309481   21713 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 05:03:14.309490   21713 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 05:03:14.309495   21713 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 05:03:14.317934   21713 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:03:14.317967   21713 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 05:03:14.318054   21713 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:03:14.318239   21713 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 05:03:14.318249   21713 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0318 05:03:14.318432   21713 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 05:03:14.318757   21713 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 05:03:14.319242   21713 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0318 05:03:14.326989   21713 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 05:03:14.327018   21713 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 05:03:14.327125   21713 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 05:03:14.327206   21713 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 05:03:14.327395   21713 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0318 05:03:14.327399   21713 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:03:14.327710   21713 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:03:14.327948   21713 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0318 05:03:16.280259   21713 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0318 05:03:16.292117   21713 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0318 05:03:16.292149   21713 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0318 05:03:16.292213   21713 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0318 05:03:16.303725   21713 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0318 05:03:16.333359   21713 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0318 05:03:16.345135   21713 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0318 05:03:16.345157   21713 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0318 05:03:16.345215   21713 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0318 05:03:16.357519   21713 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0318 05:03:16.357635   21713 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0318 05:03:16.359342   21713 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0318 05:03:16.359365   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0318 05:03:16.363762   21713 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0318 05:03:16.367352   21713 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0318 05:03:16.367366   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0318 05:03:16.378978   21713 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0318 05:03:16.378999   21713 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 05:03:16.379057   21713 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	W0318 05:03:16.394352   21713 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0318 05:03:16.394501   21713 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:03:16.394737   21713 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0318 05:03:16.396780   21713 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0318 05:03:16.406946   21713 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0318 05:03:16.407005   21713 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0318 05:03:16.414800   21713 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0318 05:03:16.414821   21713 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:03:16.414837   21713 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0318 05:03:16.414851   21713 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 05:03:16.414876   21713 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:03:16.414876   21713 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0318 05:03:16.418218   21713 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0318 05:03:16.418233   21713 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 05:03:16.418277   21713 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0318 05:03:16.427167   21713 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 05:03:16.436518   21713 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0318 05:03:16.436541   21713 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0318 05:03:16.436636   21713 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0318 05:03:16.447965   21713 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0318 05:03:16.447990   21713 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0318 05:03:16.448007   21713 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 05:03:16.448023   21713 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0318 05:03:16.448040   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0318 05:03:16.448047   21713 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 05:03:16.476046   21713 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0318 05:03:16.490607   21713 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0318 05:03:16.490621   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0318 05:03:16.524224   21713 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0318 05:03:16.894212   21713 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0318 05:03:16.894739   21713 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:03:16.933331   21713 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0318 05:03:16.933369   21713 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:03:16.933472   21713 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:03:16.961600   21713 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 05:03:16.961790   21713 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0318 05:03:16.963818   21713 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0318 05:03:16.963833   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0318 05:03:16.991168   21713 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 05:03:16.991182   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0318 05:03:17.222543   21713 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 05:03:17.222583   21713 cache_images.go:92] duration metric: took 2.913173334s to LoadCachedImages
	W0318 05:03:17.222627   21713 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
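	Note: LoadCachedImages above runs a fixed per-image pipeline: docker image inspect to compare IDs, docker rmi to drop the mismatched copy, scp of the cached tarball into /var/lib/minikube/images, then a pipe into docker load. The closing warning fires because the etcd_3.5.3-0 tarball is absent from the host cache, so that one image is never transferred. The load step in isolation, exactly as ssh_runner executes it:
	    # Stream a cached image tarball into the Docker daemon.
	    sudo cat /var/lib/minikube/images/pause_3.7 | docker load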
	I0318 05:03:17.222633   21713 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0318 05:03:17.222693   21713 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-211000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 05:03:17.222758   21713 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 05:03:17.235969   21713 cni.go:84] Creating CNI manager for ""
	I0318 05:03:17.235981   21713 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:03:17.235985   21713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 05:03:17.235994   21713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-211000 NodeName:stopped-upgrade-211000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 05:03:17.236060   21713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-211000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
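	Note: the kubeadm config above is one four-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml further down. It can be exercised without touching the node via kubeadm's dry-run mode (a sketch using the binary path this run installs; not a step minikube itself performs):
	    # Parse and validate the rendered config, printing what init would do.
	    sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run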
	
	I0318 05:03:17.236121   21713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0318 05:03:17.238994   21713 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 05:03:17.239023   21713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 05:03:17.241925   21713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0318 05:03:17.246688   21713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 05:03:17.251503   21713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0318 05:03:17.257144   21713 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0318 05:03:17.258373   21713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 05:03:17.262111   21713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:17.350115   21713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 05:03:17.355733   21713 certs.go:68] Setting up /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000 for IP: 10.0.2.15
	I0318 05:03:17.355740   21713 certs.go:194] generating shared ca certs ...
	I0318 05:03:17.355748   21713 certs.go:226] acquiring lock for ca certs: {Name:mk67337f74312fe6750257c43ce98e6fa0b5d738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:03:17.355981   21713 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.key
	I0318 05:03:17.356018   21713 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/proxy-client-ca.key
	I0318 05:03:17.356024   21713 certs.go:256] generating profile certs ...
	I0318 05:03:17.356080   21713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/client.key
	I0318 05:03:17.356097   21713 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.key.a3531d9c
	I0318 05:03:17.356108   21713 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.crt.a3531d9c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0318 05:03:17.420724   21713 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.crt.a3531d9c ...
	I0318 05:03:17.420734   21713 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.crt.a3531d9c: {Name:mk89c7cbcc3e59aca651554e0dcc4a0f6b744ecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:03:17.421007   21713 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.key.a3531d9c ...
	I0318 05:03:17.421013   21713 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.key.a3531d9c: {Name:mk3e238a5423a92cece846889e751e8c55965fd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:03:17.421143   21713 certs.go:381] copying /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.crt.a3531d9c -> /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.crt
	I0318 05:03:17.421270   21713 certs.go:385] copying /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.key.a3531d9c -> /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.key
	I0318 05:03:17.421399   21713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/proxy-client.key
	I0318 05:03:17.421519   21713 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/19926.pem (1338 bytes)
	W0318 05:03:17.421538   21713 certs.go:480] ignoring /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/19926_empty.pem, impossibly tiny 0 bytes
	I0318 05:03:17.421543   21713 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 05:03:17.421565   21713 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem (1078 bytes)
	I0318 05:03:17.421582   21713 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem (1123 bytes)
	I0318 05:03:17.421600   21713 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/key.pem (1679 bytes)
	I0318 05:03:17.421640   21713 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/files/etc/ssl/certs/199262.pem (1708 bytes)
	I0318 05:03:17.421985   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 05:03:17.428961   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 05:03:17.435505   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 05:03:17.442703   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0318 05:03:17.449924   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 05:03:17.457261   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 05:03:17.463697   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 05:03:17.470517   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 05:03:17.477696   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/files/etc/ssl/certs/199262.pem --> /usr/share/ca-certificates/199262.pem (1708 bytes)
	I0318 05:03:17.484423   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 05:03:17.490978   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/19926.pem --> /usr/share/ca-certificates/19926.pem (1338 bytes)
	I0318 05:03:17.498028   21713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 05:03:17.503502   21713 ssh_runner.go:195] Run: openssl version
	I0318 05:03:17.505311   21713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199262.pem && ln -fs /usr/share/ca-certificates/199262.pem /etc/ssl/certs/199262.pem"
	I0318 05:03:17.508236   21713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199262.pem
	I0318 05:03:17.509561   21713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 11:50 /usr/share/ca-certificates/199262.pem
	I0318 05:03:17.509579   21713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199262.pem
	I0318 05:03:17.511361   21713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199262.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 05:03:17.514716   21713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 05:03:17.518169   21713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 05:03:17.519707   21713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:02 /usr/share/ca-certificates/minikubeCA.pem
	I0318 05:03:17.519725   21713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 05:03:17.521490   21713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 05:03:17.524429   21713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19926.pem && ln -fs /usr/share/ca-certificates/19926.pem /etc/ssl/certs/19926.pem"
	I0318 05:03:17.527324   21713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19926.pem
	I0318 05:03:17.528802   21713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 11:50 /usr/share/ca-certificates/19926.pem
	I0318 05:03:17.528823   21713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19926.pem
	I0318 05:03:17.530568   21713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19926.pem /etc/ssl/certs/51391683.0"
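	Note: the ls/openssl/ln sequence above installs each PEM the OpenSSL way: hash the certificate subject, then link the file into /etc/ssl/certs under <hash>.0 (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the test certs) so lookups by subject hash resolve. By hand (a sketch, same files as the log):
	    # Derive the subject hash OpenSSL searches for, then create the <hash>.0 link.
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"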
	I0318 05:03:17.533892   21713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 05:03:17.535336   21713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 05:03:17.537441   21713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 05:03:17.539399   21713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 05:03:17.541686   21713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 05:03:17.543547   21713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 05:03:17.545364   21713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
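	Note: each -checkend 86400 run above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 keeps the existing cert, non-zero would trigger regeneration. Standalone (a sketch, using one of the paths from the log):
	    # Exit 0 if still valid in 24h, 1 if it will have expired by then.
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expires within 24h"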
	I0318 05:03:17.547132   21713 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54310 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:st
opped-upgrade-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 05:03:17.547195   21713 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 05:03:17.557207   21713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 05:03:17.560176   21713 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 05:03:17.560182   21713 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 05:03:17.560188   21713 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 05:03:17.560209   21713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 05:03:17.562881   21713 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 05:03:17.562912   21713 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-211000" does not appear in /Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:03:17.562926   21713 kubeconfig.go:62] /Users/jenkins/minikube-integration/18427-19517/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-211000" cluster setting kubeconfig missing "stopped-upgrade-211000" context setting]
	I0318 05:03:17.563098   21713 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/kubeconfig: {Name:mke65151970e01af41afaa654a36ecdb221d1a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:03:17.563759   21713 kapi.go:59] client config for stopped-upgrade-211000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/client.key", CAFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[
]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10656aa80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 05:03:17.564558   21713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 05:03:17.567147   21713 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-211000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0318 05:03:17.567153   21713 kubeadm.go:1154] stopping kube-system containers ...
	I0318 05:03:17.567200   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 05:03:17.577864   21713 docker.go:483] Stopping containers: [c9608635c8f8 5f01a1c185ea 221a0b4b0ae5 faf1fd770eea f7ba78c6046c 64f4772f7d6b 8dac42bbc563 deb6e2882c0c]
	I0318 05:03:17.577936   21713 ssh_runner.go:195] Run: docker stop c9608635c8f8 5f01a1c185ea 221a0b4b0ae5 faf1fd770eea f7ba78c6046c 64f4772f7d6b 8dac42bbc563 deb6e2882c0c
	I0318 05:03:17.589578   21713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 05:03:17.594867   21713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 05:03:17.597951   21713 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 05:03:17.597956   21713 kubeadm.go:156] found existing configuration files:
	
	I0318 05:03:17.597976   21713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/admin.conf
	I0318 05:03:17.600479   21713 kubeadm.go:162] "https://control-plane.minikube.internal:54310" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 05:03:17.600500   21713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 05:03:17.603060   21713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/kubelet.conf
	I0318 05:03:17.606117   21713 kubeadm.go:162] "https://control-plane.minikube.internal:54310" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 05:03:17.606140   21713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 05:03:17.608848   21713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/controller-manager.conf
	I0318 05:03:17.611164   21713 kubeadm.go:162] "https://control-plane.minikube.internal:54310" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 05:03:17.611184   21713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 05:03:17.614145   21713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/scheduler.conf
	I0318 05:03:17.616941   21713 kubeadm.go:162] "https://control-plane.minikube.internal:54310" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 05:03:17.616963   21713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 05:03:17.619264   21713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 05:03:17.622154   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 05:03:17.644538   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 05:03:18.184240   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 05:03:18.318866   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 05:03:18.347300   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 05:03:18.369434   21713 api_server.go:52] waiting for apiserver process to appear ...
	I0318 05:03:18.369525   21713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:03:18.871556   21713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:03:19.371555   21713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:03:19.376351   21713 api_server.go:72] duration metric: took 1.006952083s to wait for apiserver process to appear ...
	I0318 05:03:19.376360   21713 api_server.go:88] waiting for apiserver healthz status ...
	I0318 05:03:19.376387   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
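	Note: readiness is established in two stages here: api_server.go polls pgrep every 500ms until a kube-apiserver process exists (1.007s above), then repeatedly GETs https://10.0.2.15:8443/healthz until it answers; the "context deadline exceeded" lines further down are those probes timing out while the apiserver is still coming up. A hand-rolled equivalent of the two loops (a sketch; -k because the apiserver certificate is not in the host trust store):
	    # Wait for the process, then for the healthz endpoint, roughly as api_server.go does.
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done
	    until curl -ksf --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do sleep 1; done
	    echo "apiserver healthy"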
	I0318 05:03:26.698296   21725 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.741323292s)
	I0318 05:03:26.698377   21725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 05:03:26.702848   21725 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0318 05:03:26.711437   21725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 05:03:26.716046   21725 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 05:03:26.794939   21725 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 05:03:26.888720   21725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:26.970123   21725 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 05:03:26.975785   21725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 05:03:26.980589   21725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:27.070918   21725 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 05:03:27.109355   21725 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 05:03:27.109429   21725 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 05:03:27.111541   21725 start.go:562] Will wait 60s for crictl version
	I0318 05:03:27.111600   21725 ssh_runner.go:195] Run: which crictl
	I0318 05:03:27.113329   21725 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 05:03:27.125164   21725 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0318 05:03:27.125231   21725 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 05:03:27.137572   21725 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 05:03:27.153006   21725 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0318 05:03:27.153078   21725 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0318 05:03:27.154314   21725 kubeadm.go:877] updating cluster {Name:running-upgrade-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54379 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:running-upgrade-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0318 05:03:27.154375   21725 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 05:03:27.154418   21725 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 05:03:27.166407   21725 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 05:03:27.166417   21725 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 05:03:27.166463   21725 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 05:03:27.169387   21725 ssh_runner.go:195] Run: which lz4
	I0318 05:03:27.170691   21725 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 05:03:27.171941   21725 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 05:03:27.171950   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0318 05:03:24.378365   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:24.378394   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:27.861365   21725 docker.go:649] duration metric: took 690.727334ms to copy over tarball
	I0318 05:03:27.861413   21725 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 05:03:28.974559   21725 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.113167792s)
	I0318 05:03:28.974574   21725 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 05:03:28.990234   21725 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 05:03:28.993241   21725 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0318 05:03:28.998008   21725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:29.082972   21725 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 05:03:30.771498   21725 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.688563792s)
	I0318 05:03:30.771592   21725 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 05:03:30.787188   21725 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 05:03:30.787198   21725 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 05:03:30.787203   21725 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 05:03:30.793794   21725 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 05:03:30.793816   21725 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0318 05:03:30.793952   21725 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:03:30.793991   21725 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 05:03:30.794026   21725 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:03:30.794072   21725 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 05:03:30.794111   21725 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0318 05:03:30.794183   21725 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 05:03:30.802876   21725 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:03:30.803015   21725 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 05:03:30.803046   21725 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 05:03:30.803270   21725 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0318 05:03:30.803273   21725 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0318 05:03:30.803295   21725 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 05:03:30.804371   21725 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:03:30.804898   21725 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 05:03:29.378456   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:29.378491   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0318 05:03:32.694010   21725 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0318 05:03:32.694261   21725 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:03:32.714659   21725 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0318 05:03:32.714692   21725 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:03:32.714765   21725 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:03:32.729031   21725 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0318 05:03:32.729156   21725 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0318 05:03:32.731054   21725 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0318 05:03:32.731073   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0318 05:03:32.753347   21725 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0318 05:03:32.772143   21725 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0318 05:03:32.772156   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0318 05:03:32.780555   21725 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0318 05:03:32.780577   21725 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 05:03:32.780638   21725 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0318 05:03:32.783451   21725 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 05:03:32.812948   21725 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0318 05:03:32.831922   21725 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0318 05:03:32.831955   21725 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0318 05:03:32.832003   21725 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0318 05:03:32.832025   21725 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0318 05:03:32.832053   21725 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0318 05:03:32.832025   21725 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 05:03:32.832098   21725 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0318 05:03:32.832099   21725 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 05:03:32.834710   21725 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0318 05:03:32.840881   21725 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0318 05:03:32.843885   21725 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0318 05:03:32.845521   21725 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0318 05:03:32.845927   21725 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0318 05:03:32.857103   21725 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0318 05:03:32.857125   21725 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0318 05:03:32.857180   21725 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0318 05:03:32.861184   21725 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0318 05:03:32.861205   21725 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 05:03:32.861260   21725 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0318 05:03:32.867634   21725 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0318 05:03:32.867654   21725 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 05:03:32.867716   21725 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0318 05:03:32.877482   21725 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0318 05:03:32.877592   21725 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0318 05:03:32.885326   21725 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0318 05:03:32.886901   21725 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0318 05:03:32.886919   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0318 05:03:32.887028   21725 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0318 05:03:32.894476   21725 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0318 05:03:32.894485   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0318 05:03:32.922780   21725 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0318 05:03:33.428454   21725 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0318 05:03:33.429111   21725 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:03:33.469431   21725 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0318 05:03:33.469473   21725 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:03:33.469582   21725 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:03:33.490174   21725 cache_images.go:92] duration metric: took 2.70304225s to LoadCachedImages
	W0318 05:03:33.490230   21725 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0318 05:03:33.490239   21725 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0318 05:03:33.490312   21725 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-349000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 05:03:33.490393   21725 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 05:03:33.507945   21725 cni.go:84] Creating CNI manager for ""
	I0318 05:03:33.507957   21725 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:03:33.507962   21725 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 05:03:33.507971   21725 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-349000 NodeName:running-upgrade-349000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 05:03:33.508053   21725 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-349000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 05:03:33.508112   21725 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0318 05:03:33.511876   21725 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 05:03:33.511904   21725 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 05:03:33.515031   21725 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0318 05:03:33.520432   21725 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 05:03:33.525404   21725 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0318 05:03:33.530683   21725 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0318 05:03:33.531924   21725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:33.626822   21725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 05:03:33.631784   21725 certs.go:68] Setting up /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000 for IP: 10.0.2.15
	I0318 05:03:33.631793   21725 certs.go:194] generating shared ca certs ...
	I0318 05:03:33.631801   21725 certs.go:226] acquiring lock for ca certs: {Name:mk67337f74312fe6750257c43ce98e6fa0b5d738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:03:33.631935   21725 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.key
	I0318 05:03:33.631970   21725 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/proxy-client-ca.key
	I0318 05:03:33.631976   21725 certs.go:256] generating profile certs ...
	I0318 05:03:33.632038   21725 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/client.key
	I0318 05:03:33.632054   21725 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.key.c00468f0
	I0318 05:03:33.632065   21725 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.crt.c00468f0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0318 05:03:33.711684   21725 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.crt.c00468f0 ...
	I0318 05:03:33.711696   21725 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.crt.c00468f0: {Name:mk407906b5df038122ffa715219255414a809a59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:03:33.711969   21725 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.key.c00468f0 ...
	I0318 05:03:33.711974   21725 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.key.c00468f0: {Name:mkb2110062d9ecb95c1e2a8df75a80d9cd55ba13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:03:33.712097   21725 certs.go:381] copying /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.crt.c00468f0 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.crt
	I0318 05:03:33.713090   21725 certs.go:385] copying /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.key.c00468f0 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.key
	I0318 05:03:33.713256   21725 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/proxy-client.key
	I0318 05:03:33.713373   21725 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/19926.pem (1338 bytes)
	W0318 05:03:33.713397   21725 certs.go:480] ignoring /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/19926_empty.pem, impossibly tiny 0 bytes
	I0318 05:03:33.713401   21725 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 05:03:33.713419   21725 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem (1078 bytes)
	I0318 05:03:33.713437   21725 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem (1123 bytes)
	I0318 05:03:33.713452   21725 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/key.pem (1679 bytes)
	I0318 05:03:33.713491   21725 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/files/etc/ssl/certs/199262.pem (1708 bytes)
	I0318 05:03:33.713834   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 05:03:33.721684   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 05:03:33.728809   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 05:03:33.735657   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0318 05:03:33.743454   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 05:03:33.750014   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 05:03:33.756880   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 05:03:33.763923   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 05:03:33.771379   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 05:03:33.778494   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/19926.pem --> /usr/share/ca-certificates/19926.pem (1338 bytes)
	I0318 05:03:33.785284   21725 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/files/etc/ssl/certs/199262.pem --> /usr/share/ca-certificates/199262.pem (1708 bytes)
	I0318 05:03:33.792199   21725 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 05:03:33.797354   21725 ssh_runner.go:195] Run: openssl version
	I0318 05:03:33.798988   21725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19926.pem && ln -fs /usr/share/ca-certificates/19926.pem /etc/ssl/certs/19926.pem"
	I0318 05:03:33.802024   21725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19926.pem
	I0318 05:03:33.803335   21725 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 11:50 /usr/share/ca-certificates/19926.pem
	I0318 05:03:33.803352   21725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19926.pem
	I0318 05:03:33.805210   21725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19926.pem /etc/ssl/certs/51391683.0"
	I0318 05:03:33.808133   21725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199262.pem && ln -fs /usr/share/ca-certificates/199262.pem /etc/ssl/certs/199262.pem"
	I0318 05:03:33.811347   21725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199262.pem
	I0318 05:03:33.812675   21725 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 11:50 /usr/share/ca-certificates/199262.pem
	I0318 05:03:33.812695   21725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199262.pem
	I0318 05:03:33.814623   21725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199262.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 05:03:33.817308   21725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 05:03:33.820381   21725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 05:03:33.821770   21725 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:02 /usr/share/ca-certificates/minikubeCA.pem
	I0318 05:03:33.821791   21725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 05:03:33.823504   21725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 05:03:33.826566   21725 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 05:03:33.828009   21725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 05:03:33.829899   21725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 05:03:33.831560   21725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 05:03:33.833443   21725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 05:03:33.835687   21725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 05:03:33.838681   21725 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 05:03:33.840439   21725 kubeadm.go:391] StartCluster: {Name:running-upgrade-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54379 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 05:03:33.840508   21725 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 05:03:33.850972   21725 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 05:03:33.854224   21725 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 05:03:33.854230   21725 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 05:03:33.854233   21725 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 05:03:33.854253   21725 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 05:03:33.857163   21725 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 05:03:33.857432   21725 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-349000" does not appear in /Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:03:33.857547   21725 kubeconfig.go:62] /Users/jenkins/minikube-integration/18427-19517/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-349000" cluster setting kubeconfig missing "running-upgrade-349000" context setting]
	I0318 05:03:33.857734   21725 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/kubeconfig: {Name:mke65151970e01af41afaa654a36ecdb221d1a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:03:33.858130   21725 kapi.go:59] client config for running-upgrade-349000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/client.key", CAFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10578ea80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 05:03:33.858452   21725 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 05:03:33.861111   21725 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-349000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0318 05:03:33.861117   21725 kubeadm.go:1154] stopping kube-system containers ...
	I0318 05:03:33.861161   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 05:03:33.873391   21725 docker.go:483] Stopping containers: [c92808339edd d988b026b77e d4f26039d08f a4880ca05709 82437f53be1f 1cf5bd1f2f5d 39606e718772 3d1d66d16a8e fb7044aa6fe8 eab46fcf2c4f 08607bd13bb5 0f0ff398976b 979957847e88 09fd4ef3cc7e 525748e95af3 5b5f45df096f 81416833671d b534994d7aae 4dfc21fbd434 b60836a37ed6 0d3907cde91d 99caf181965e]
	I0318 05:03:33.873461   21725 ssh_runner.go:195] Run: docker stop c92808339edd d988b026b77e d4f26039d08f a4880ca05709 82437f53be1f 1cf5bd1f2f5d 39606e718772 3d1d66d16a8e fb7044aa6fe8 eab46fcf2c4f 08607bd13bb5 0f0ff398976b 979957847e88 09fd4ef3cc7e 525748e95af3 5b5f45df096f 81416833671d b534994d7aae 4dfc21fbd434 b60836a37ed6 0d3907cde91d 99caf181965e
	I0318 05:03:33.885464   21725 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 05:03:33.981587   21725 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 05:03:33.984776   21725 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5639 Mar 18 12:02 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Mar 18 12:02 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Mar 18 12:02 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Mar 18 12:02 /etc/kubernetes/scheduler.conf
	
	I0318 05:03:33.984819   21725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/admin.conf
	I0318 05:03:33.987883   21725 kubeadm.go:162] "https://control-plane.minikube.internal:54379" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 05:03:33.987909   21725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 05:03:33.991165   21725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/kubelet.conf
	I0318 05:03:33.994392   21725 kubeadm.go:162] "https://control-plane.minikube.internal:54379" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 05:03:33.994416   21725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 05:03:33.997009   21725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/controller-manager.conf
	I0318 05:03:33.999889   21725 kubeadm.go:162] "https://control-plane.minikube.internal:54379" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 05:03:33.999915   21725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 05:03:34.002985   21725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/scheduler.conf
	I0318 05:03:34.005660   21725 kubeadm.go:162] "https://control-plane.minikube.internal:54379" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 05:03:34.005681   21725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 05:03:34.008311   21725 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 05:03:34.011538   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 05:03:34.046328   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 05:03:34.397306   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 05:03:34.621820   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 05:03:34.656125   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 05:03:34.680426   21725 api_server.go:52] waiting for apiserver process to appear ...
	I0318 05:03:34.680509   21725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:03:35.180810   21725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:03:35.682541   21725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:03:36.182534   21725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:03:36.682479   21725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:03:37.182514   21725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:03:37.186733   21725 api_server.go:72] duration metric: took 2.506387083s to wait for apiserver process to appear ...
	I0318 05:03:37.186742   21725 api_server.go:88] waiting for apiserver healthz status ...
	I0318 05:03:37.186758   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:34.378591   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:34.378611   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:42.187752   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:42.187777   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:39.378828   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:39.378895   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:47.188575   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:47.188642   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:44.379309   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:44.379379   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:52.189107   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:52.189176   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:49.380369   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:49.380460   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:57.189750   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:57.189795   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:54.381749   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:54.381802   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:02.190334   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:02.190409   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:59.383210   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:59.383251   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:07.191196   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:07.191253   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:04.385006   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:04.385058   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:12.192325   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:12.192350   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:09.387248   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:09.387291   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:17.193598   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:17.193629   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:14.389521   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:14.389594   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:22.195249   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:22.195271   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:19.390565   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:19.390848   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:04:19.418825   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:04:19.418958   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:04:19.437989   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:04:19.438085   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:04:19.451284   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:04:19.451360   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:04:19.462515   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:04:19.462592   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:04:19.472537   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:04:19.472606   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:04:19.483688   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:04:19.483756   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:04:19.493898   21713 logs.go:276] 0 containers: []
	W0318 05:04:19.493909   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:04:19.493977   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:04:19.504609   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:04:19.504636   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:04:19.504643   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:04:19.518926   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:04:19.518938   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:04:19.538104   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:04:19.538116   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:04:19.549258   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:04:19.549267   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:04:19.564613   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:04:19.564625   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:04:19.602107   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:04:19.602120   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:04:19.606051   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:04:19.606060   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:04:19.624113   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:04:19.624123   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:04:19.641990   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:04:19.642003   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:04:19.655755   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:04:19.655768   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:04:19.667884   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:04:19.667896   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:04:19.679329   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:04:19.679340   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:04:19.694608   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:04:19.694619   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:04:19.706293   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:04:19.706307   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:04:19.820611   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:04:19.820625   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:04:19.848707   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:04:19.848718   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:04:19.861062   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:04:19.861073   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:04:22.388258   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:27.197346   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:27.197397   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:27.389906   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:27.390284   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:04:27.429529   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:04:27.429665   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:04:27.451715   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:04:27.451818   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:04:27.466420   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:04:27.466500   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:04:27.478784   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:04:27.478861   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:04:27.492495   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:04:27.492574   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:04:27.503351   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:04:27.503424   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:04:27.514052   21713 logs.go:276] 0 containers: []
	W0318 05:04:27.514062   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:04:27.514122   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:04:27.525007   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:04:27.525042   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:04:27.525047   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:04:27.551534   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:04:27.551554   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:04:27.567027   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:04:27.567043   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:04:27.586784   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:04:27.586794   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:04:27.611875   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:04:27.611885   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:04:27.616072   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:04:27.616078   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:04:27.629503   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:04:27.629520   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:04:27.641408   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:04:27.641422   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:04:27.659687   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:04:27.659706   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:04:27.671040   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:04:27.671051   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:04:27.709008   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:04:27.709021   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:04:27.723539   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:04:27.723552   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:04:27.736034   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:04:27.736044   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:04:27.750304   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:04:27.750315   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:04:27.761836   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:04:27.761851   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:04:27.775240   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:04:27.775255   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:04:27.787117   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:04:27.787129   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:04:32.197668   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:32.197704   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:30.328829   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:37.199778   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:37.200034   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:04:37.223720   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:04:37.223826   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:04:37.239468   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:04:37.239538   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:04:37.252711   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:04:37.252789   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:04:37.264258   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:04:37.264331   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:04:37.284534   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:04:37.284610   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:04:37.295475   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:04:37.295538   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:04:37.306677   21725 logs.go:276] 0 containers: []
	W0318 05:04:37.306687   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:04:37.306750   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:04:37.317306   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:04:37.317321   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:04:37.317327   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:04:37.356395   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:04:37.356407   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:04:37.360600   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:04:37.360608   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:04:37.374626   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:04:37.374636   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:04:37.386800   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:04:37.386811   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:04:35.331420   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:35.331679   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:04:35.351656   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:04:35.351737   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:04:35.362892   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:04:35.362973   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:04:35.373768   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:04:35.373849   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:04:35.384768   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:04:35.384849   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:04:35.395278   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:04:35.395343   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:04:35.412018   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:04:35.412083   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:04:35.429443   21713 logs.go:276] 0 containers: []
	W0318 05:04:35.429457   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:04:35.429515   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:04:35.446901   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:04:35.446921   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:04:35.446927   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:04:35.472716   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:04:35.472728   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:04:35.487311   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:04:35.487326   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:04:35.502488   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:04:35.502500   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:04:35.520017   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:04:35.520028   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:04:35.531667   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:04:35.531681   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:04:35.542829   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:04:35.542840   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:04:35.547312   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:04:35.547322   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:04:35.561971   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:04:35.561982   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:04:35.574218   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:04:35.574231   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:04:35.592630   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:04:35.592641   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:04:35.629534   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:04:35.629542   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:04:35.665442   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:04:35.665454   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:04:35.679227   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:04:35.679237   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:04:35.691247   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:04:35.691261   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:04:35.703167   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:04:35.703178   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:04:35.716568   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:04:35.716578   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:04:37.411097   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:04:37.411108   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:04:37.422853   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:04:37.422865   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:04:37.450291   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:04:37.450300   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:04:37.462522   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:04:37.462532   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:04:37.552237   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:04:37.552249   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:04:37.591827   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:04:37.591838   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:04:37.606180   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:04:37.606194   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:04:37.620474   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:04:37.620489   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:04:37.632091   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:04:37.632102   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:04:37.647053   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:04:37.647067   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:04:37.658875   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:04:37.658888   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:04:37.676007   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:04:37.676018   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:04:37.687402   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:04:37.687414   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:04:37.699507   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:04:37.699517   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:04:40.214574   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:38.242295   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:45.216790   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:45.217015   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:04:45.243744   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:04:45.243856   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:04:45.258990   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:04:45.259062   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:04:45.271512   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:04:45.271583   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:04:45.282264   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:04:45.282329   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:04:45.293042   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:04:45.293113   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:04:45.304657   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:04:45.304732   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:04:45.340618   21725 logs.go:276] 0 containers: []
	W0318 05:04:45.340635   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:04:45.340725   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:04:45.356451   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:04:45.356469   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:04:45.356475   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:04:45.376724   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:04:45.376735   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:04:45.388001   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:04:45.388015   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:04:45.404338   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:04:45.404348   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:04:45.417037   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:04:45.417049   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:04:45.428331   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:04:45.428342   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:04:45.440327   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:04:45.440339   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:04:45.457384   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:04:45.457396   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:04:45.468991   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:04:45.469004   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:04:45.473857   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:04:45.473865   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:04:45.510629   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:04:45.510642   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:04:45.527650   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:04:45.527660   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:04:45.538571   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:04:45.538582   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:04:45.577956   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:04:45.577968   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:04:45.592164   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:04:45.592174   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:04:45.609110   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:04:45.609121   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:04:45.626435   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:04:45.626445   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:04:45.666955   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:04:45.666963   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:04:45.680577   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:04:45.680588   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:04:43.244502   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:43.244701   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:04:43.268247   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:04:43.268344   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:04:43.283299   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:04:43.283375   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:04:43.295157   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:04:43.295238   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:04:43.307620   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:04:43.307698   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:04:43.318488   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:04:43.318586   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:04:43.329095   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:04:43.329169   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:04:43.339422   21713 logs.go:276] 0 containers: []
	W0318 05:04:43.339435   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:04:43.339497   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:04:43.349532   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:04:43.349553   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:04:43.349559   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:04:43.391496   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:04:43.391507   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:04:43.404978   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:04:43.404988   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:04:43.430699   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:04:43.430707   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:04:43.468987   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:04:43.468997   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:04:43.493883   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:04:43.493893   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:04:43.508880   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:04:43.508890   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:04:43.524182   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:04:43.524194   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:04:43.535432   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:04:43.535452   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:04:43.546852   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:04:43.546864   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:04:43.560793   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:04:43.560804   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:04:43.571973   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:04:43.571985   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:04:43.583548   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:04:43.583559   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:04:43.594737   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:04:43.594748   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:04:43.609144   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:04:43.609154   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:04:43.626895   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:04:43.626908   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:04:43.638946   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:04:43.638956   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:04:46.144953   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:48.207819   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:51.147145   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:51.147343   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:04:51.162798   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:04:51.162890   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:04:51.179184   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:04:51.179255   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:04:51.193957   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:04:51.194023   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:04:51.211934   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:04:51.212011   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:04:51.222115   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:04:51.222180   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:04:51.232256   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:04:51.232324   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:04:51.242168   21713 logs.go:276] 0 containers: []
	W0318 05:04:51.242179   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:04:51.242230   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:04:51.252561   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:04:51.252578   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:04:51.252583   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:04:51.266780   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:04:51.266797   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:04:51.291979   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:04:51.291992   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:04:51.303841   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:04:51.303852   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:04:51.318911   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:04:51.318924   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:04:51.323010   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:04:51.323017   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:04:51.337678   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:04:51.337691   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:04:51.349098   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:04:51.349111   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:04:51.369644   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:04:51.369655   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:04:51.381175   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:04:51.381189   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:04:51.417509   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:04:51.417523   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:04:51.440591   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:04:51.440598   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:04:51.455493   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:04:51.455504   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:04:51.469278   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:04:51.469290   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:04:51.483559   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:04:51.483571   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:04:51.501199   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:04:51.501210   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:04:51.513224   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:04:51.513241   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:04:53.209969   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:53.210126   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:04:53.223885   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:04:53.223971   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:04:53.235870   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:04:53.235947   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:04:53.247248   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:04:53.247322   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:04:53.257723   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:04:53.257784   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:04:53.268158   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:04:53.268218   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:04:53.283618   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:04:53.283686   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:04:53.294339   21725 logs.go:276] 0 containers: []
	W0318 05:04:53.294348   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:04:53.294399   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:04:53.305093   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:04:53.305112   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:04:53.305118   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:04:53.317347   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:04:53.317360   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:04:53.333351   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:04:53.333361   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:04:53.360152   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:04:53.360159   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:04:53.399141   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:04:53.399155   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:04:53.436124   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:04:53.436139   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:04:53.450641   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:04:53.450652   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:04:53.461459   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:04:53.461471   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:04:53.473114   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:04:53.473123   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:04:53.488734   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:04:53.488747   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:04:53.504876   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:04:53.504888   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:04:53.522874   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:04:53.522887   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:04:53.534785   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:04:53.534801   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:04:53.549338   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:04:53.549352   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:04:53.560915   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:04:53.560926   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:04:53.565742   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:04:53.565748   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:04:53.602317   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:04:53.602331   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:04:53.615999   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:04:53.616011   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:04:53.628147   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:04:53.628159   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:04:56.143624   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:54.053467   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:01.145736   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:01.145857   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:01.157833   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:05:01.157917   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:01.168951   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:05:01.169018   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:01.179324   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:05:01.179399   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:01.194869   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:05:01.194935   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:01.208042   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:05:01.208119   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:01.218564   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:05:01.218633   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:01.228364   21725 logs.go:276] 0 containers: []
	W0318 05:05:01.228375   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:01.228438   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:01.238431   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:05:01.238448   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:05:01.238453   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:05:01.252336   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:05:01.252348   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:05:01.266205   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:05:01.266214   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:05:01.277652   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:05:01.277666   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:05:01.289269   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:05:01.289282   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:05:01.306587   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:05:01.306601   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:05:01.317927   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:05:01.317939   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:05:01.332133   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:05:01.332143   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:05:01.345689   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:01.345699   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:01.373284   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:01.373292   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:01.413445   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:05:01.413454   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:05:01.450127   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:05:01.450137   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:05:01.466344   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:05:01.466358   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:05:01.477908   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:01.477920   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:01.482883   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:01.482897   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:01.531110   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:05:01.531124   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:05:01.542986   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:05:01.543011   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:05:01.559880   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:05:01.559891   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:05:01.572014   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:05:01.572027   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:04:59.055622   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:59.055911   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:04:59.078766   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:04:59.078889   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:04:59.094465   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:04:59.094547   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:04:59.107448   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:04:59.107529   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:04:59.118145   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:04:59.118215   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:04:59.128915   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:04:59.128990   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:04:59.139280   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:04:59.139344   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:04:59.149125   21713 logs.go:276] 0 containers: []
	W0318 05:04:59.149137   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:04:59.149196   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:04:59.159813   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:04:59.159832   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:04:59.159838   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:04:59.175321   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:04:59.175332   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:04:59.179937   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:04:59.179947   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:04:59.191313   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:04:59.191323   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:04:59.207276   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:04:59.207289   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:04:59.217938   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:04:59.217950   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:04:59.230901   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:04:59.230912   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:04:59.266178   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:04:59.266191   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:04:59.279872   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:04:59.279886   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:04:59.294845   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:04:59.294856   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:04:59.306870   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:04:59.306883   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:04:59.330057   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:04:59.330064   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:04:59.366868   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:04:59.366876   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:04:59.394878   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:04:59.394889   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:04:59.406233   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:04:59.406248   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:04:59.417850   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:04:59.417862   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:04:59.435049   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:04:59.435058   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:05:01.950640   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:04.084751   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:06.952855   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:06.953136   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:06.978960   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:05:06.979085   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:06.995469   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:05:06.995569   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:07.009036   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:05:07.009104   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:07.037112   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:05:07.037189   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:07.047696   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:05:07.047765   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:07.058343   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:05:07.058416   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:07.068892   21713 logs.go:276] 0 containers: []
	W0318 05:05:07.068903   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:07.068962   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:07.079668   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:05:07.079687   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:07.079695   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:07.084069   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:05:07.084078   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:05:07.109052   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:05:07.109063   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:05:07.123863   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:05:07.123876   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:05:07.135805   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:05:07.135820   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:05:07.151983   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:05:07.151995   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:05:07.163914   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:05:07.163926   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:07.175923   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:07.175934   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:07.213647   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:05:07.213662   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:05:07.228238   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:05:07.228247   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:05:07.239971   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:05:07.239986   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:05:07.258188   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:05:07.258203   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:05:07.274153   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:07.274168   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:07.296993   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:07.297004   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:07.333684   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:05:07.333692   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:05:07.345925   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:05:07.345939   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:05:07.366732   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:05:07.366743   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:05:09.087011   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:09.087299   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:09.116590   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:05:09.116662   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:09.130138   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:05:09.130202   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:09.141637   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:05:09.141700   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:09.152919   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:05:09.152983   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:09.163806   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:05:09.163865   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:09.174249   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:05:09.174314   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:09.184856   21725 logs.go:276] 0 containers: []
	W0318 05:05:09.184866   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:09.184915   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:09.195638   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:05:09.195656   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:05:09.195662   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:05:09.232925   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:05:09.232936   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:05:09.251336   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:05:09.251347   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:05:09.263500   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:05:09.263511   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:05:09.274370   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:05:09.274384   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:05:09.286105   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:05:09.286118   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:05:09.302413   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:09.302423   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:09.341007   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:09.341014   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:09.345584   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:05:09.345591   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:05:09.359359   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:05:09.359369   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:05:09.373692   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:05:09.373702   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:05:09.384795   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:05:09.384809   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:05:09.397757   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:05:09.397768   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:05:09.411665   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:09.411675   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:09.438288   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:05:09.438300   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:09.450642   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:09.450652   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:09.485869   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:05:09.485883   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:05:09.499990   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:05:09.500001   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:05:09.512058   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:05:09.512068   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:05:12.026314   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:09.881853   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:17.029036   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:17.029478   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:17.070683   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:05:17.070827   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:17.095364   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:05:17.095481   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:17.111340   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:05:17.111437   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:17.123253   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:05:17.123329   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:17.133942   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:05:17.134019   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:17.146260   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:05:17.146337   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:17.157161   21725 logs.go:276] 0 containers: []
	W0318 05:05:17.157172   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:17.157234   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:17.167762   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:05:17.167776   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:05:17.167785   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:05:17.179562   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:17.179573   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:17.215539   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:05:17.215550   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:05:17.227191   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:05:17.227203   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:05:17.243266   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:05:17.243276   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:05:17.261162   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:05:17.261177   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:05:17.277764   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:05:17.277777   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:05:17.291272   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:05:17.291283   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:05:17.302897   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:05:17.302908   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:05:17.314199   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:05:17.314213   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:05:17.329042   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:05:17.329053   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:05:17.343617   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:17.343628   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:17.369085   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:05:17.369098   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:17.383262   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:05:17.383274   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:05:17.397338   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:05:17.397352   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:05:14.883814   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:14.884203   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:14.920095   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:05:14.920239   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:14.946185   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:05:14.946273   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:14.959853   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:05:14.959934   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:14.971379   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:05:14.971460   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:14.982037   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:05:14.982108   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:14.992619   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:05:14.992685   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:15.003198   21713 logs.go:276] 0 containers: []
	W0318 05:05:15.003209   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:15.003271   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:15.015100   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:05:15.015118   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:15.015124   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:15.019775   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:05:15.019783   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:05:15.033768   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:05:15.033780   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:05:15.045298   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:05:15.045309   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:05:15.056884   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:05:15.056896   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:05:15.068213   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:05:15.068226   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:05:15.094866   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:15.094881   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:15.120580   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:15.120591   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:15.155761   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:05:15.155772   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:05:15.169967   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:05:15.169981   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:05:15.185181   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:05:15.185194   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:05:15.203346   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:15.203356   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:15.241786   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:05:15.241795   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:05:15.255710   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:05:15.255721   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:05:15.266964   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:05:15.266975   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:05:15.280610   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:05:15.280621   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:05:15.292887   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:05:15.292897   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:17.411757   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:17.411767   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:17.453144   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:17.453153   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:17.457481   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:05:17.457488   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:05:17.495355   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:05:17.495367   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:05:20.008227   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:17.805979   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:25.010952   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:25.011436   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:25.049005   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:05:25.049139   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:25.069016   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:05:25.069120   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:25.083846   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:05:25.083927   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:25.098952   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:05:25.099030   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:25.110429   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:05:25.110501   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:25.123085   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:05:25.123160   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:25.133467   21725 logs.go:276] 0 containers: []
	W0318 05:05:25.133496   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:25.133557   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:25.145080   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:05:25.145097   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:05:25.145102   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:05:25.156935   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:05:25.156949   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:05:25.170627   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:25.170639   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:25.175305   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:05:25.175316   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:05:25.190315   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:05:25.190330   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:05:25.202320   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:05:25.202331   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:05:25.220672   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:05:25.220686   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:05:25.234314   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:25.234325   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:25.274088   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:25.274096   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:25.308087   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:05:25.308099   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:05:25.344912   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:05:25.344923   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:05:25.358993   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:05:25.359004   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:05:25.369980   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:05:25.369996   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:05:25.384029   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:05:25.384041   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:05:25.395418   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:05:25.395427   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:05:25.411823   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:05:25.411842   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:05:25.423908   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:05:25.423920   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:05:25.435072   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:25.435083   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:25.460867   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:05:25.460876   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:22.811605   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:22.811762   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:22.823723   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:05:22.823796   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:22.834903   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:05:22.834974   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:22.844828   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:05:22.844905   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:22.857655   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:05:22.857726   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:22.867826   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:05:22.867899   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:22.878330   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:05:22.878413   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:22.888924   21713 logs.go:276] 0 containers: []
	W0318 05:05:22.888937   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:22.888995   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:22.899759   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:05:22.899782   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:22.899788   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:22.936945   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:05:22.936952   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:05:22.961881   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:05:22.961892   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:05:22.975779   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:05:22.975792   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:05:22.986903   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:05:22.986917   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:05:23.000685   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:05:23.000695   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:05:23.012037   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:23.012049   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:23.016522   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:23.016528   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:23.052447   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:05:23.052460   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:05:23.066807   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:05:23.066818   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:05:23.084313   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:05:23.084323   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:05:23.105053   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:05:23.105067   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:23.116649   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:05:23.116661   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:05:23.132360   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:05:23.132372   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:05:23.144178   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:05:23.144191   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:05:23.158880   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:05:23.158891   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:05:23.175351   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:23.175363   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:25.701331   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:27.974883   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:30.703546   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:30.703923   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:30.740641   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:05:30.740784   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:30.758842   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:05:30.758933   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:30.773276   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:05:30.773349   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:30.786043   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:05:30.786127   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:30.796340   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:05:30.796417   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:30.807536   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:05:30.807622   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:30.818470   21713 logs.go:276] 0 containers: []
	W0318 05:05:30.818482   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:30.818541   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:30.833067   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:05:30.833087   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:30.833092   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:30.868946   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:05:30.868957   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:05:30.880708   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:05:30.880720   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:05:30.892559   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:05:30.892569   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:05:30.910059   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:05:30.910069   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:05:30.934833   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:05:30.934846   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:05:30.949728   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:05:30.949739   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:05:30.963775   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:05:30.963786   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:05:30.981741   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:05:30.981755   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:05:30.996303   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:05:30.996314   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:05:31.007646   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:05:31.007655   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:05:31.020230   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:31.020241   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:31.058239   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:31.058247   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:31.062052   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:05:31.062058   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:05:31.076014   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:05:31.076026   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:05:31.087068   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:31.087079   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:31.111755   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:05:31.111771   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:32.977589   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:32.977959   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:33.006536   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:05:33.006668   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:33.025308   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:05:33.025398   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:33.039491   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:05:33.039575   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:33.050743   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:05:33.050807   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:33.061124   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:05:33.061198   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:33.071666   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:05:33.071740   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:33.081765   21725 logs.go:276] 0 containers: []
	W0318 05:05:33.081780   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:33.081836   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:33.097208   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:05:33.097236   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:33.097242   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:33.136189   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:05:33.136197   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:05:33.149928   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:05:33.149940   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:05:33.164790   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:05:33.164801   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:05:33.176131   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:33.176144   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:33.213662   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:05:33.213673   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:05:33.225500   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:05:33.225514   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:05:33.238640   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:05:33.238650   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:05:33.251295   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:33.251308   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:33.256123   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:05:33.256130   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:05:33.293025   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:05:33.293036   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:05:33.304687   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:05:33.304698   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:05:33.316480   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:05:33.316491   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:33.328727   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:05:33.328738   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:05:33.346422   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:05:33.346434   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:05:33.359078   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:05:33.359090   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:05:33.375262   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:05:33.375275   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:05:33.386505   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:05:33.386519   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:05:33.403164   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:33.403175   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:35.930434   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:33.625835   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:40.932468   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:40.932670   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:40.948182   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:05:40.948271   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:40.961320   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:05:40.961397   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:40.972001   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:05:40.972076   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:40.986549   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:05:40.986626   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:40.997938   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:05:40.998008   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:41.008830   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:05:41.008898   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:41.019948   21725 logs.go:276] 0 containers: []
	W0318 05:05:41.019960   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:41.020026   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:41.030558   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:05:41.030572   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:05:41.030577   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:05:41.042197   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:41.042209   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:41.067712   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:05:41.067721   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:05:41.081130   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:05:41.081142   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:05:41.094503   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:05:41.094514   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:05:41.106690   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:05:41.106704   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:05:41.119662   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:05:41.119673   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:05:41.136566   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:05:41.136581   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:05:41.151757   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:05:41.151769   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:05:41.169768   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:05:41.169779   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:05:41.185491   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:05:41.185502   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:05:41.196823   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:05:41.196835   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:05:41.213483   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:41.213495   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:41.254134   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:41.254147   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:41.259119   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:41.259132   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:41.300959   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:05:41.300969   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:05:41.337169   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:05:41.337181   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:05:41.348500   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:05:41.348513   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:05:41.366933   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:05:41.366944   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:38.627216   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:38.627364   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:38.647261   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:05:38.647373   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:38.660870   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:05:38.660939   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:38.671336   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:05:38.671402   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:38.681583   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:05:38.681655   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:38.692233   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:05:38.692301   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:38.702873   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:05:38.702948   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:38.712973   21713 logs.go:276] 0 containers: []
	W0318 05:05:38.712983   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:38.713035   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:38.723845   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:05:38.723864   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:38.723871   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:38.761803   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:05:38.761814   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:05:38.773355   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:38.773367   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:38.797564   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:05:38.797574   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:05:38.822041   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:05:38.822054   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:05:38.836253   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:05:38.836264   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:05:38.853395   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:05:38.853407   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:05:38.864815   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:05:38.864825   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:05:38.882532   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:05:38.882542   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:38.894243   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:05:38.894253   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:05:38.907951   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:05:38.907962   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:05:38.923802   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:38.923814   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:38.960681   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:38.960690   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:38.964687   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:05:38.964694   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:05:38.978659   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:05:38.978670   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:05:38.992105   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:05:38.992114   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:05:39.003463   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:05:39.003478   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:05:41.524659   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:43.880907   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:46.526914   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:46.527070   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:46.543797   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:05:46.543893   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:46.556558   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:05:46.556635   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:46.566907   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:05:46.566977   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:46.577625   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:05:46.577700   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:46.587605   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:05:46.587680   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:46.598604   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:05:46.598673   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:46.608666   21713 logs.go:276] 0 containers: []
	W0318 05:05:46.608676   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:46.608734   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:46.619461   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:05:46.619479   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:05:46.619485   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:05:46.634442   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:05:46.634453   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:46.646856   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:05:46.646867   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:05:46.661895   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:05:46.661908   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:05:46.680697   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:05:46.680710   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:05:46.692122   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:05:46.692133   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:05:46.703601   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:46.703613   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:46.742440   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:46.742458   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:46.746435   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:05:46.746441   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:05:46.760396   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:05:46.760407   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:05:46.784110   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:05:46.784124   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:05:46.795256   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:05:46.795269   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:05:46.813082   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:05:46.813093   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:05:46.826078   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:46.826087   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:46.849899   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:46.849907   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:46.885076   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:05:46.885087   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:05:46.899306   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:05:46.899316   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:05:48.883328   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:48.883547   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:48.901745   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:05:48.901842   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:48.914831   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:05:48.914910   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:48.931015   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:05:48.931087   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:48.941844   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:05:48.941913   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:48.952654   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:05:48.952719   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:48.963772   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:05:48.963846   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:48.973888   21725 logs.go:276] 0 containers: []
	W0318 05:05:48.973904   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:48.973969   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:48.988274   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:05:48.988295   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:05:48.988302   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:05:49.001701   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:05:49.001715   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:05:49.041139   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:05:49.041150   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:05:49.052345   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:05:49.052356   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:05:49.064008   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:05:49.064019   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:05:49.075321   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:05:49.075332   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:05:49.089554   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:05:49.089564   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:05:49.101039   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:05:49.101051   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:05:49.118359   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:05:49.118370   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:05:49.136916   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:49.136927   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:49.176926   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:49.176935   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:49.181657   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:05:49.181667   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:05:49.200334   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:05:49.200343   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:05:49.214369   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:05:49.214380   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:49.227664   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:49.227676   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:49.264421   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:05:49.264434   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:05:49.276781   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:05:49.276792   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:05:49.296978   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:49.296993   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:49.321767   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:05:49.321774   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:05:51.834837   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:49.413049   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:56.837174   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:56.837494   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:56.868367   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:05:56.868491   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:56.884892   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:05:56.884974   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:56.898590   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:05:56.898663   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:56.909751   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:05:56.909816   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:56.920665   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:05:56.920730   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:56.931224   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:05:56.931297   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:56.941304   21725 logs.go:276] 0 containers: []
	W0318 05:05:56.941321   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:56.941373   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:56.951817   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:05:56.951834   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:05:56.951839   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:05:56.962556   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:05:56.962569   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:05:56.976290   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:05:56.976302   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:05:56.989359   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:56.989370   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:57.014156   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:05:57.014165   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:57.025480   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:57.025494   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:57.059637   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:05:57.059648   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:05:57.073886   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:05:57.073895   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:05:57.085686   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:05:57.085699   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:05:57.104472   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:05:57.104484   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:05:57.116136   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:57.116149   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:57.155392   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:05:57.155401   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:05:57.166966   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:05:57.166977   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:05:57.185533   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:05:57.185545   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:05:57.202093   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:57.202106   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:57.206716   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:05:57.206725   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:05:57.220868   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:05:57.220879   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:05:57.258600   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:05:57.258611   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:05:57.276887   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:05:57.276898   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:05:54.415179   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:54.415284   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:54.427488   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:05:54.427564   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:54.439708   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:05:54.439775   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:54.449921   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:05:54.449995   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:54.460516   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:05:54.460591   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:54.473729   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:05:54.473806   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:54.484232   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:05:54.484308   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:54.494518   21713 logs.go:276] 0 containers: []
	W0318 05:05:54.494531   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:54.494591   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:54.504923   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:05:54.504942   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:05:54.504946   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:54.516433   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:54.516447   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:54.551554   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:05:54.551570   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:05:54.566046   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:05:54.566057   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:05:54.584006   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:05:54.584020   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:05:54.595747   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:54.595759   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:54.619115   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:05:54.619124   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:05:54.630386   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:54.630397   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:54.668403   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:05:54.668419   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:05:54.682855   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:05:54.682866   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:05:54.707681   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:05:54.707692   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:05:54.720024   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:05:54.720037   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:05:54.736202   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:54.736215   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:54.740385   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:05:54.740393   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:05:54.754429   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:05:54.754441   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:05:54.769066   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:05:54.769075   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:05:54.779878   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:05:54.779889   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
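Each log-gathering pass begins by enumerating containers per component with a docker name filter, which produces the "N containers: [...]" lines above (including the warning when no "kindnet" container matches). A minimal sketch of that step, assuming a local docker daemon rather than the ssh_runner path the test actually uses; listContainers and the component list are illustrative, not minikube's logs.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (running or not)
// whose name matches the k8s_<component> prefix, mirroring
// "docker ps -a --filter=name=k8s_... --format={{.ID}}" above.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: error: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}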
	I0318 05:05:57.293656   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:59.790074   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:02.295836   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:02.296175   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:02.322850   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:06:02.322975   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:02.340619   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:06:02.340697   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:02.353938   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:06:02.354003   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:02.365822   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:06:02.365892   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:02.376390   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:06:02.376458   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:02.387376   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:06:02.387441   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:02.397747   21713 logs.go:276] 0 containers: []
	W0318 05:06:02.397757   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:02.397814   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:02.408330   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:06:02.408349   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:06:02.408354   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:06:02.422146   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:06:02.422155   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:06:02.433096   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:02.433107   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:02.482325   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:06:02.482340   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:06:02.511741   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:06:02.511751   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:06:02.529351   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:02.529361   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:02.553577   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:02.553591   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:02.558037   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:06:02.558044   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:06:02.573258   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:06:02.573272   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:06:02.586376   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:06:02.586391   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:06:02.597184   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:06:02.597198   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:06:02.608248   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:06:02.608262   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:02.620187   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:06:02.620201   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:06:02.647284   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:06:02.647297   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:06:02.661434   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:06:02.661450   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:06:02.680590   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:02.680603   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:02.719088   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:06:02.719101   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:06:04.792542   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:04.792726   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:04.809873   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:06:04.809960   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:04.820731   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:06:04.820810   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:04.833939   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:06:04.834024   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:04.844324   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:06:04.844392   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:04.855188   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:06:04.855260   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:04.865967   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:06:04.866037   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:04.875719   21725 logs.go:276] 0 containers: []
	W0318 05:06:04.875729   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:04.875779   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:04.886372   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:06:04.886390   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:04.886397   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:04.890927   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:06:04.890934   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:06:04.904833   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:06:04.904844   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:06:04.919720   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:06:04.919731   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:06:04.931804   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:06:04.931818   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:06:04.946880   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:06:04.946893   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:04.958412   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:06:04.958422   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:06:04.970499   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:04.970510   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:05.008700   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:06:05.008714   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:06:05.022001   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:06:05.022014   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:06:05.038454   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:06:05.038465   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:06:05.050104   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:05.050114   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:05.074852   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:06:05.074859   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:06:05.088610   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:05.088623   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:05.127149   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:06:05.127160   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:06:05.165494   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:06:05.165504   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:06:05.179162   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:06:05.179173   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:06:05.194296   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:06:05.194308   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:06:05.207018   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:06:05.207029   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
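The "Gathering logs for ..." lines map each source to a shell command: per-container sources run "docker logs --tail 400 <id>", while the kubelet, Docker, and dmesg sources come from journalctl and dmesg. A sketch of that fan-out, run locally via /bin/bash -c for illustration; the gather helper and the source map are assumptions, and the real tool executes these over SSH inside the guest.

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one log-collection command through a shell and prints
// its combined output, mirroring the ssh_runner.go lines above.
func gather(name, cmd string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("%s: %v\n", name, err)
	}
	fmt.Print(string(out))
}

func main() {
	sources := map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
		// Per-container sources would add entries of the form:
		// "etcd [fb7044aa6fe8]": "docker logs --tail 400 fb7044aa6fe8"
	}
	for name, cmd := range sources {
		gather(name, cmd)
	}
}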
	I0318 05:06:05.233123   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:07.727257   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:10.235218   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:10.235386   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:10.245751   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:06:10.245832   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:10.256979   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:06:10.257051   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:10.266988   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:06:10.267056   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:10.277941   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:06:10.278017   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:10.288110   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:06:10.288177   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:10.298731   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:06:10.298800   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:10.309180   21713 logs.go:276] 0 containers: []
	W0318 05:06:10.309193   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:10.309249   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:10.319177   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:06:10.319197   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:06:10.319202   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:06:10.343560   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:06:10.343570   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:06:10.361145   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:06:10.361157   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:06:10.372998   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:06:10.373010   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:10.384952   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:10.384969   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:10.389114   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:06:10.389122   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:06:10.403350   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:06:10.403361   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:06:10.422459   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:06:10.422469   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:06:10.439996   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:10.440007   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:10.463456   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:10.463463   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:10.498611   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:06:10.498621   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:06:10.513529   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:06:10.513541   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:06:10.526686   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:06:10.526699   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:06:10.543499   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:06:10.543510   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:06:10.555418   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:10.555430   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:10.594035   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:06:10.594046   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:06:10.608729   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:06:10.608741   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:06:12.729401   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:12.729563   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:12.740807   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:06:12.740884   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:12.751544   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:06:12.751618   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:12.762188   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:06:12.762257   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:12.772532   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:06:12.772596   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:12.783350   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:06:12.783422   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:12.793806   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:06:12.793883   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:12.803801   21725 logs.go:276] 0 containers: []
	W0318 05:06:12.803814   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:12.803880   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:12.814278   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:06:12.814291   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:06:12.814297   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:06:12.825650   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:06:12.825661   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:06:12.836621   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:06:12.836632   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:06:12.849046   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:12.849058   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:12.853645   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:12.853654   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:12.890516   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:06:12.890527   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:06:12.904864   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:06:12.904874   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:06:12.916361   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:06:12.916373   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:06:12.927904   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:06:12.927915   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:06:12.941549   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:06:12.941561   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:06:12.953093   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:06:12.953105   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:06:12.970363   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:06:12.970374   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:06:12.983973   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:06:12.983988   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:12.996121   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:12.996130   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:13.019347   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:13.019356   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:13.057204   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:06:13.057213   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:06:13.093507   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:06:13.093520   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:06:13.107085   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:06:13.107097   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:06:13.123472   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:06:13.123484   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
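The "container status" source uses a shell fallback: the command substitution `which crictl || echo crictl` resolves crictl's full path when it is installed (otherwise it yields the bare name), and if that crictl invocation then fails, the trailing "|| sudo docker ps -a" runs instead, so a container listing is produced either way. A minimal local sketch of executing that one-liner (for illustration only; the test runs it on the guest via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl when present; fall back to docker if it fails.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(string(out))
}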
	I0318 05:06:15.636976   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:13.122613   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:20.639402   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:20.639624   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:20.672198   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:06:20.672290   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:20.686793   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:06:20.686859   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:20.698397   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:06:20.698470   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:20.708911   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:06:20.708991   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:20.722275   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:06:20.722342   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:20.734619   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:06:20.734692   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:20.745351   21725 logs.go:276] 0 containers: []
	W0318 05:06:20.745364   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:20.745427   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:20.759508   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:06:20.759525   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:20.759530   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:20.764111   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:06:20.764119   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:06:20.780322   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:20.780335   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:20.818753   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:06:20.818763   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:06:20.829952   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:06:20.829964   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:06:20.846574   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:06:20.846585   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:06:20.859789   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:06:20.859799   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:06:20.871356   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:20.871369   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:20.895771   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:06:20.895778   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:06:20.909450   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:06:20.909460   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:06:20.947227   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:06:20.947239   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:06:20.961177   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:06:20.961188   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:06:20.975789   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:06:20.975800   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:06:20.987162   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:06:20.987175   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:06:20.998699   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:20.998710   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:21.035295   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:06:21.035308   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:06:21.049763   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:06:21.049776   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:06:21.061999   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:06:21.062010   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:06:21.073320   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:06:21.073332   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:18.124651   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:18.124916   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:18.161350   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:06:18.161450   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:18.176575   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:06:18.176653   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:18.189055   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:06:18.189129   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:18.201079   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:06:18.201164   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:18.213628   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:06:18.213704   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:18.224206   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:06:18.224275   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:18.240293   21713 logs.go:276] 0 containers: []
	W0318 05:06:18.240305   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:18.240367   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:18.250431   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:06:18.250448   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:06:18.250453   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:06:18.261647   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:06:18.261656   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:06:18.278433   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:06:18.278446   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:18.290064   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:18.290078   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:18.324964   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:06:18.324976   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:06:18.336389   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:06:18.336400   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:06:18.349675   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:06:18.349687   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:06:18.361049   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:18.361060   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:18.397481   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:06:18.397490   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:06:18.411269   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:06:18.411281   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:06:18.425599   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:06:18.425609   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:06:18.437170   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:06:18.437179   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:06:18.452000   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:18.452011   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:18.456089   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:06:18.456095   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:06:18.469811   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:06:18.469824   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:06:18.494315   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:06:18.494327   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:06:18.505803   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:18.505813   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:21.028685   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:23.588142   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:26.030891   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:26.031264   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:26.070473   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:06:26.070619   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:26.090965   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:06:26.091069   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:26.106879   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:06:26.106976   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:26.119422   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:06:26.119495   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:26.130367   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:06:26.130431   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:26.146006   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:06:26.146073   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:26.156409   21713 logs.go:276] 0 containers: []
	W0318 05:06:26.156422   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:26.156484   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:26.166782   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:06:26.166798   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:06:26.166804   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:06:26.178521   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:26.178531   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:26.202588   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:06:26.202596   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:26.214687   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:26.214698   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:26.253364   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:06:26.253373   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:06:26.267236   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:06:26.267248   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:06:26.281856   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:06:26.281868   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:06:26.295472   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:06:26.295485   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:06:26.310978   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:06:26.310988   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:06:26.328240   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:06:26.328251   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:06:26.341960   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:06:26.341971   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:06:26.353021   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:26.353034   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:26.357305   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:06:26.357312   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:06:26.385668   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:06:26.385678   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:06:26.400680   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:26.400691   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:26.441958   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:06:26.441971   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:06:26.456609   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:06:26.456619   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:06:28.590446   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:28.590747   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:28.612231   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:06:28.612333   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:28.628239   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:06:28.628318   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:28.640554   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:06:28.640629   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:28.651634   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:06:28.651701   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:28.663158   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:06:28.663234   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:28.678875   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:06:28.678947   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:28.689378   21725 logs.go:276] 0 containers: []
	W0318 05:06:28.689388   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:28.689450   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:28.699880   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:06:28.699895   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:06:28.699900   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:06:28.720250   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:06:28.720260   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:06:28.731930   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:06:28.731940   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:06:28.745902   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:28.745917   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:28.784753   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:06:28.784765   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:06:28.822424   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:06:28.822440   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:06:28.835840   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:06:28.835854   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:06:28.850701   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:06:28.850716   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:06:28.862076   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:06:28.862088   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:28.874921   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:28.874936   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:28.879749   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:06:28.879756   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:06:28.891157   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:06:28.891168   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:06:28.907113   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:06:28.907122   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:06:28.918637   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:28.918648   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:28.955484   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:06:28.955494   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:06:28.966512   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:06:28.966524   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:06:28.978024   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:06:28.978034   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:06:28.995017   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:06:28.995028   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:06:29.008906   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:29.008918   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:31.534639   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:28.969434   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:36.536883   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:36.537150   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:36.561704   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:06:36.561807   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:36.577914   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:06:36.578005   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:36.590844   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:06:36.590919   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:36.601926   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:06:36.602000   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:36.612974   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:06:36.613047   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:36.628093   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:06:36.628166   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:36.638082   21725 logs.go:276] 0 containers: []
	W0318 05:06:36.638094   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:36.638155   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:36.652968   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:06:36.652983   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:06:36.652989   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:06:36.665011   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:06:36.665023   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:06:36.704953   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:36.704975   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:36.715604   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:06:36.715617   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:06:36.730589   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:06:36.730600   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:06:36.745270   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:06:36.745287   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:06:36.757037   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:06:36.757050   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:06:36.769551   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:06:36.769566   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:06:36.781366   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:36.781378   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:36.821223   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:06:36.821236   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:06:36.836102   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:06:36.836115   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:06:36.847539   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:06:36.847551   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:06:36.864513   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:06:36.864523   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:06:36.876534   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:06:36.876545   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:06:36.893353   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:36.893362   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:36.916821   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:06:36.916829   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:36.930322   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:36.930335   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:36.970032   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:06:36.970042   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:06:36.983331   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:06:36.983342   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:06:33.971079   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:33.971298   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:33.992349   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:06:33.992450   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:34.007266   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:06:34.007345   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:34.019257   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:06:34.019323   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:34.029854   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:06:34.029927   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:34.044272   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:06:34.044345   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:34.055346   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:06:34.055408   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:34.065457   21713 logs.go:276] 0 containers: []
	W0318 05:06:34.065508   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:34.065575   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:34.076124   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:06:34.076141   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:06:34.076146   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:06:34.093532   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:06:34.093548   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:06:34.105226   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:06:34.105237   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:06:34.116867   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:06:34.116878   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:34.129025   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:34.129037   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:34.133283   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:06:34.133291   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:06:34.166718   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:06:34.166727   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:06:34.181109   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:06:34.181119   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:06:34.192488   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:06:34.192500   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:06:34.209045   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:06:34.209058   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:06:34.220838   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:34.220847   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:34.245744   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:06:34.245751   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:06:34.258019   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:06:34.258032   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:06:34.272033   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:34.272047   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:34.311251   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:34.311267   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:34.348011   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:06:34.348025   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:06:34.362463   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:06:34.362480   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:06:36.887577   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:39.496243   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:41.888635   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:41.888823   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:41.904699   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:06:41.904774   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:41.919820   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:06:41.919897   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:41.931477   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:06:41.931550   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:41.941914   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:06:41.941985   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:41.952367   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:06:41.952440   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:41.963017   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:06:41.963089   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:41.977759   21713 logs.go:276] 0 containers: []
	W0318 05:06:41.977774   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:41.977830   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:41.988210   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:06:41.988229   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:06:41.988234   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:06:42.002479   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:06:42.002493   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:06:42.014087   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:42.014098   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:42.035858   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:06:42.035866   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:06:42.048617   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:42.048629   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:42.053258   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:06:42.053266   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:06:42.067316   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:06:42.067327   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:06:42.083361   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:06:42.083374   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:06:42.101359   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:06:42.101371   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:06:42.113569   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:42.113580   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:42.151025   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:06:42.151033   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:06:42.175683   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:06:42.175695   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:06:42.187230   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:06:42.187241   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:06:42.198778   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:06:42.198792   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:42.212192   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:42.212205   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:42.247835   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:06:42.247861   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:06:42.262318   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:06:42.262331   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:06:44.498890   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:44.499243   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:44.534768   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:06:44.534901   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:44.552690   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:06:44.552788   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:44.566559   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:06:44.566639   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:44.578974   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:06:44.579049   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:44.590062   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:06:44.590142   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:44.602384   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:06:44.602450   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:44.612605   21725 logs.go:276] 0 containers: []
	W0318 05:06:44.612618   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:44.612684   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:44.623575   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:06:44.623592   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:06:44.623598   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:44.635589   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:06:44.635599   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:06:44.675093   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:06:44.675105   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:06:44.688924   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:06:44.688934   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:06:44.706113   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:06:44.706124   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:06:44.719261   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:44.719276   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:44.724088   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:06:44.724094   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:06:44.737759   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:06:44.737773   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:06:44.751929   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:06:44.751939   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:06:44.767836   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:06:44.767847   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:06:44.782181   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:44.782192   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:44.805768   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:06:44.805778   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:06:44.818507   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:06:44.818518   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:06:44.835198   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:06:44.835210   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:06:44.854409   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:06:44.854421   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:06:44.872121   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:44.872132   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:44.913455   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:44.913465   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:44.948029   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:06:44.948040   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:06:44.961982   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:06:44.961992   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:06:44.777434   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:47.474959   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:49.778559   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:49.778806   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:49.807670   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:06:49.807781   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:49.826965   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:06:49.827047   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:49.840504   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:06:49.840574   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:49.852001   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:06:49.852074   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:49.862379   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:06:49.862450   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:49.872952   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:06:49.873023   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:49.882876   21713 logs.go:276] 0 containers: []
	W0318 05:06:49.882890   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:49.882947   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:49.895456   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:06:49.895473   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:06:49.895478   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:06:49.908790   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:06:49.908802   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:06:49.923944   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:06:49.923958   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:06:49.937608   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:49.937622   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:49.959373   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:06:49.959383   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:06:49.971037   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:06:49.971050   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:06:49.981721   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:49.981732   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:49.985997   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:06:49.986005   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:06:49.997532   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:06:49.997544   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:06:50.009265   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:06:50.009280   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:06:50.026483   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:06:50.026494   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:50.038115   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:50.038127   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:50.076654   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:50.076664   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:50.112578   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:06:50.112594   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:06:50.145688   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:06:50.145699   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:06:50.160526   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:06:50.160540   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:06:50.174767   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:06:50.174783   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:06:52.688426   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:52.477284   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:52.477734   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:52.515401   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:06:52.515540   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:52.537499   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:06:52.537623   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:52.552290   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:06:52.552373   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:52.564345   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:06:52.564413   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:52.575756   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:06:52.575826   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:52.586980   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:06:52.587049   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:52.602190   21725 logs.go:276] 0 containers: []
	W0318 05:06:52.602204   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:52.602272   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:52.617329   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:06:52.617344   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:06:52.617352   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:06:52.629712   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:52.629722   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:52.668867   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:06:52.668878   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:06:52.688362   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:06:52.688375   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:06:52.702516   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:06:52.702527   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:06:52.718805   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:06:52.718816   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:06:52.730632   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:06:52.730644   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:06:52.742527   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:52.742538   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:52.782880   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:06:52.782893   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:06:52.821256   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:06:52.821270   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:06:52.838584   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:06:52.838596   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:06:52.861721   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:52.861731   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:52.884855   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:06:52.884862   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:52.896708   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:52.896719   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:52.900921   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:06:52.900928   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:06:52.914843   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:06:52.914854   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:06:52.926920   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:06:52.926930   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:06:52.938802   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:06:52.938814   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:06:52.953444   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:06:52.953458   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:06:55.466647   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:57.690486   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:57.690811   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:57.722647   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:06:57.722788   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:57.748075   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:06:57.748159   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:57.761524   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:06:57.761596   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:57.774681   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:06:57.774754   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:57.785636   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:06:57.785699   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:57.796168   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:06:57.796242   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:07:00.469121   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:00.469364   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:07:00.489226   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:07:00.489321   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:07:00.502525   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:07:00.502604   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:07:00.514370   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:07:00.514453   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:07:00.524839   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:07:00.524915   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:07:00.535186   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:07:00.535255   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:07:00.545880   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:07:00.545957   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:07:00.558103   21725 logs.go:276] 0 containers: []
	W0318 05:07:00.558115   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:07:00.558180   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:07:00.569218   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:07:00.569235   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:07:00.569241   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:07:00.580859   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:07:00.580872   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:07:00.605912   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:07:00.605930   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:07:00.610537   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:07:00.610543   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:07:00.626082   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:07:00.626098   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:07:00.643037   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:07:00.643049   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:07:00.655263   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:07:00.655275   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:07:00.669851   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:07:00.669864   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:07:00.681636   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:07:00.681647   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:07:00.695663   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:07:00.695674   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:07:00.707055   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:07:00.707067   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:07:00.749164   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:07:00.749178   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:07:00.791124   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:07:00.791136   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:07:00.805575   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:07:00.805586   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:07:00.821640   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:07:00.821654   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:07:00.837447   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:07:00.837462   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:07:00.854353   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:07:00.854365   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:07:00.890602   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:07:00.890614   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:07:00.905603   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:07:00.905614   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:06:57.806700   21713 logs.go:276] 0 containers: []
	W0318 05:06:57.806712   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:57.806775   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:57.816900   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:06:57.816919   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:57.816924   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:57.839296   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:06:57.839309   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:57.851064   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:06:57.851078   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:06:57.876589   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:06:57.876602   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:06:57.890752   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:06:57.890765   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:06:57.901642   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:06:57.901654   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:06:57.914906   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:06:57.914919   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:06:57.926628   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:57.926638   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:57.930524   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:06:57.930533   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:06:57.944647   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:06:57.944658   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:06:57.959272   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:06:57.959284   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:06:57.970737   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:06:57.970748   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:06:57.982307   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:06:57.982321   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:06:57.998863   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:06:57.998875   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:06:58.022424   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:58.022433   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:58.059403   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:58.059413   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:58.094456   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:06:58.094467   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:07:00.610653   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:03.419125   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:05.612616   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:05.612888   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:07:05.638106   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:07:05.638237   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:07:05.655470   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:07:05.655560   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:07:05.669006   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:07:05.669079   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:07:05.680413   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:07:05.680489   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:07:05.692422   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:07:05.692498   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:07:05.703459   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:07:05.703527   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:07:05.713765   21713 logs.go:276] 0 containers: []
	W0318 05:07:05.713777   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:07:05.713837   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:07:05.724209   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:07:05.724228   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:07:05.724234   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:07:05.730775   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:07:05.730784   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:07:05.744999   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:07:05.745013   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:07:05.769900   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:07:05.769911   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:07:05.781746   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:07:05.781756   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:07:05.799120   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:07:05.799131   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:07:05.817348   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:07:05.817360   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:07:05.857686   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:07:05.857709   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:07:05.876331   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:07:05.876343   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:07:05.894849   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:07:05.894861   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:07:05.934471   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:07:05.934506   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:07:05.945835   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:07:05.945853   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:07:05.957756   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:07:05.957767   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:07:05.976834   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:07:05.976843   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:07:05.991188   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:07:05.991197   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:07:06.005197   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:07:06.005208   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:07:06.018407   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:07:06.018418   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:07:08.421793   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:08.422249   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:07:08.459368   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:07:08.459503   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:07:08.482205   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:07:08.482304   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:07:08.495885   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:07:08.495953   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:07:08.507572   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:07:08.507648   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:07:08.518258   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:07:08.518322   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:07:08.529348   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:07:08.529416   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:07:08.539422   21725 logs.go:276] 0 containers: []
	W0318 05:07:08.539434   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:07:08.539489   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:07:08.550539   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:07:08.550554   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:07:08.550560   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:07:08.592797   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:07:08.592807   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:07:08.607132   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:07:08.607141   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:07:08.619056   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:07:08.619068   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:07:08.630699   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:07:08.630709   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:07:08.647911   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:07:08.647924   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:07:08.663418   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:07:08.663429   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:07:08.675669   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:07:08.675683   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:07:08.716521   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:07:08.716535   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:07:08.730626   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:07:08.730636   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:07:08.769001   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:07:08.769012   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:07:08.782653   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:07:08.782665   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:07:08.794183   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:07:08.794195   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:07:08.810856   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:07:08.810867   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:07:08.822458   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:07:08.822470   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:07:08.845019   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:07:08.845027   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:07:08.849628   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:07:08.849634   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:07:08.861131   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:07:08.861144   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:07:08.873387   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:07:08.873398   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:07:11.394417   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:08.540133   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:16.396967   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:16.397207   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:07:16.413999   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:07:16.414089   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:07:16.426655   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:07:16.426728   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:07:16.437467   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:07:16.437534   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:07:16.447773   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:07:16.447849   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:07:16.461396   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:07:16.461461   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:07:16.471592   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:07:16.471665   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:07:16.481263   21725 logs.go:276] 0 containers: []
	W0318 05:07:16.481274   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:07:16.481326   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:07:16.497691   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:07:16.497704   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:07:16.497709   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:07:16.534823   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:07:16.534837   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:07:16.546389   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:07:16.546403   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:07:16.557372   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:07:16.557386   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:07:16.573858   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:07:16.573868   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:07:16.587577   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:07:16.587591   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:07:16.599418   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:07:16.599428   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:07:16.612049   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:07:16.612061   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:07:16.653005   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:07:16.653014   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:07:16.688150   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:07:16.688161   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:07:16.702223   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:07:16.702234   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:07:16.719175   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:07:16.719187   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:07:16.724231   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:07:16.724239   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:07:16.736359   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:07:16.736371   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:07:16.753420   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:07:16.753432   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:07:16.764461   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:07:16.764474   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:07:16.786743   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:07:16.786751   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:07:16.800683   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:07:16.800697   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:07:16.815181   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:07:16.815194   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:07:13.542254   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:13.542670   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:07:13.584658   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:07:13.584812   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:07:13.607002   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:07:13.607116   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:07:13.621539   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:07:13.621613   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:07:13.634416   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:07:13.634488   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:07:13.645261   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:07:13.645328   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:07:13.655660   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:07:13.655729   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:07:13.672034   21713 logs.go:276] 0 containers: []
	W0318 05:07:13.672049   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:07:13.672107   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:07:13.682773   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:07:13.682795   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:07:13.682800   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:07:13.697846   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:07:13.697860   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:07:13.733982   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:07:13.733993   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:07:13.761176   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:07:13.761187   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:07:13.772900   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:07:13.772915   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:07:13.791527   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:07:13.791540   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:07:13.803177   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:07:13.803190   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:07:13.807709   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:07:13.807718   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:07:13.821598   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:07:13.821612   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:07:13.840288   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:07:13.840301   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:07:13.852603   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:07:13.852616   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:07:13.876935   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:07:13.876946   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:07:13.916017   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:07:13.916037   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:07:13.930684   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:07:13.930698   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:07:13.941980   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:07:13.941990   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:07:13.957122   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:07:13.957132   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:07:13.970671   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:07:13.970685   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:07:16.483926   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:19.329350   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:21.484665   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:21.484804   21713 kubeadm.go:591] duration metric: took 4m3.932349459s to restartPrimaryControlPlane
	W0318 05:07:21.484970   21713 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 05:07:21.485032   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0318 05:07:22.515182   21713 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.030168042s)
	I0318 05:07:22.515251   21713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 05:07:22.520491   21713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 05:07:22.523476   21713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 05:07:22.526385   21713 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 05:07:22.526391   21713 kubeadm.go:156] found existing configuration files:
	
	I0318 05:07:22.526417   21713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/admin.conf
	I0318 05:07:22.528857   21713 kubeadm.go:162] "https://control-plane.minikube.internal:54310" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 05:07:22.528881   21713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 05:07:22.531603   21713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/kubelet.conf
	I0318 05:07:22.534497   21713 kubeadm.go:162] "https://control-plane.minikube.internal:54310" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 05:07:22.534519   21713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 05:07:22.537095   21713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/controller-manager.conf
	I0318 05:07:22.539699   21713 kubeadm.go:162] "https://control-plane.minikube.internal:54310" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 05:07:22.539720   21713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 05:07:22.542891   21713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/scheduler.conf
	I0318 05:07:22.545772   21713 kubeadm.go:162] "https://control-plane.minikube.internal:54310" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 05:07:22.545794   21713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
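	The stale-config check above follows a fixed pattern: for each kubeconfig under /etc/kubernetes, minikube greps for the expected control-plane endpoint and removes the file when the check fails (here every grep exits with status 2 because the files were already deleted by the preceding `kubeadm reset`). A minimal shell sketch of that pattern, using the endpoint and file list from this run (an illustration, not minikube's actual implementation):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # keep the file only if it already points at the expected endpoint
	      sudo grep "https://control-plane.minikube.internal:54310" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done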
	I0318 05:07:22.548264   21713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 05:07:22.565193   21713 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0318 05:07:22.565226   21713 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 05:07:22.616035   21713 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 05:07:22.616093   21713 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 05:07:22.616139   21713 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 05:07:22.664530   21713 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 05:07:22.668715   21713 out.go:204]   - Generating certificates and keys ...
	I0318 05:07:22.668751   21713 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 05:07:22.668785   21713 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 05:07:22.668822   21713 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 05:07:22.668852   21713 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 05:07:22.668886   21713 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 05:07:22.668916   21713 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 05:07:22.668950   21713 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 05:07:22.668985   21713 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 05:07:22.669034   21713 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 05:07:22.669087   21713 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 05:07:22.669106   21713 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 05:07:22.669156   21713 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 05:07:22.783021   21713 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 05:07:22.843599   21713 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 05:07:23.087842   21713 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 05:07:23.188107   21713 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 05:07:23.217640   21713 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 05:07:23.218039   21713 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 05:07:23.218089   21713 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 05:07:23.304008   21713 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 05:07:24.331588   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:24.331698   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:07:24.343914   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:07:24.343994   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:07:24.364134   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:07:24.364213   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:07:24.377348   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:07:24.377438   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:07:24.389211   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:07:24.389288   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:07:24.400906   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:07:24.400987   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:07:24.413102   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:07:24.413186   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:07:24.424134   21725 logs.go:276] 0 containers: []
	W0318 05:07:24.424147   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:07:24.424211   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:07:24.436173   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:07:24.436192   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:07:24.436198   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:07:24.451227   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:07:24.451240   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:07:24.466320   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:07:24.466332   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:07:24.513129   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:07:24.513150   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:07:24.526204   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:07:24.526216   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:07:24.542796   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:07:24.542809   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:07:24.566712   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:07:24.566729   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:07:24.571772   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:07:24.571782   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:07:24.588832   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:07:24.588844   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:07:24.607190   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:07:24.607204   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:07:24.622492   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:07:24.622508   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:07:24.634913   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:07:24.634924   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:07:24.650431   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:07:24.650445   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:07:24.691758   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:07:24.691776   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:07:24.729006   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:07:24.729020   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:07:24.743698   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:07:24.743712   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:07:24.768977   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:07:24.768991   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:07:24.786394   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:07:24.786407   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:07:24.804334   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:07:24.804350   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:07:27.317847   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:23.307244   21713 out.go:204]   - Booting up control plane ...
	I0318 05:07:23.307291   21713 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 05:07:23.307335   21713 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 05:07:23.307372   21713 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 05:07:23.307419   21713 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 05:07:23.307548   21713 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 05:07:28.311017   21713 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.006590 seconds
	I0318 05:07:28.311336   21713 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 05:07:28.322018   21713 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 05:07:28.834451   21713 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 05:07:28.834583   21713 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-211000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 05:07:29.339653   21713 kubeadm.go:309] [bootstrap-token] Using token: zzghot.6ejp1jln0cyhdi5r
	I0318 05:07:29.343508   21713 out.go:204]   - Configuring RBAC rules ...
	I0318 05:07:29.343579   21713 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 05:07:29.343659   21713 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 05:07:29.351009   21713 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 05:07:29.352389   21713 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 05:07:29.353413   21713 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 05:07:29.355738   21713 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 05:07:29.358904   21713 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 05:07:29.542155   21713 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 05:07:29.743884   21713 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 05:07:29.744359   21713 kubeadm.go:309] 
	I0318 05:07:29.744388   21713 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 05:07:29.744396   21713 kubeadm.go:309] 
	I0318 05:07:29.744439   21713 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 05:07:29.744443   21713 kubeadm.go:309] 
	I0318 05:07:29.744454   21713 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 05:07:29.744481   21713 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 05:07:29.744511   21713 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 05:07:29.744515   21713 kubeadm.go:309] 
	I0318 05:07:29.744544   21713 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 05:07:29.744547   21713 kubeadm.go:309] 
	I0318 05:07:29.744573   21713 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 05:07:29.744577   21713 kubeadm.go:309] 
	I0318 05:07:29.744603   21713 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 05:07:29.744642   21713 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 05:07:29.744685   21713 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 05:07:29.744688   21713 kubeadm.go:309] 
	I0318 05:07:29.744727   21713 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 05:07:29.744771   21713 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 05:07:29.744774   21713 kubeadm.go:309] 
	I0318 05:07:29.744818   21713 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token zzghot.6ejp1jln0cyhdi5r \
	I0318 05:07:29.744882   21713 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:2c4297b91ace817e0fb1c32526c2ad664eb333850689868816794ba1e9dfc07f \
	I0318 05:07:29.744892   21713 kubeadm.go:309] 	--control-plane 
	I0318 05:07:29.744896   21713 kubeadm.go:309] 
	I0318 05:07:29.744939   21713 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 05:07:29.744942   21713 kubeadm.go:309] 
	I0318 05:07:29.744988   21713 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token zzghot.6ejp1jln0cyhdi5r \
	I0318 05:07:29.745050   21713 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:2c4297b91ace817e0fb1c32526c2ad664eb333850689868816794ba1e9dfc07f 
	I0318 05:07:29.745200   21713 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 05:07:29.745252   21713 cni.go:84] Creating CNI manager for ""
	I0318 05:07:29.745260   21713 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:07:29.749249   21713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 05:07:29.756227   21713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 05:07:29.759604   21713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
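	The 457-byte file scp'd above is minikube's bridge CNI conflist. A representative bridge configuration of this shape, written as the log's other shell commands would write it (field values are illustrative and not the exact file from this run):

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "addIf": "true",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF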
	I0318 05:07:29.764759   21713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 05:07:29.764793   21713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 05:07:29.764841   21713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-211000 minikube.k8s.io/updated_at=2024_03_18T05_07_29_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=stopped-upgrade-211000 minikube.k8s.io/primary=true
	I0318 05:07:29.808156   21713 ops.go:34] apiserver oom_adj: -16
	I0318 05:07:29.813993   21713 kubeadm.go:1107] duration metric: took 49.230958ms to wait for elevateKubeSystemPrivileges
	W0318 05:07:29.814013   21713 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 05:07:29.814017   21713 kubeadm.go:393] duration metric: took 4m12.274897s to StartCluster
	I0318 05:07:29.814026   21713 settings.go:142] acquiring lock: {Name:mkc727ca725e75d24ce65050e373ec9e186fcd50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:07:29.814173   21713 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:07:29.814545   21713 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/kubeconfig: {Name:mke65151970e01af41afaa654a36ecdb221d1a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:07:29.814747   21713 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:07:29.819168   21713 out.go:177] * Verifying Kubernetes components...
	I0318 05:07:29.814898   21713 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 05:07:29.814864   21713 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 05:07:29.827119   21713 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-211000"
	I0318 05:07:29.827133   21713 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-211000"
	W0318 05:07:29.827136   21713 addons.go:243] addon storage-provisioner should already be in state true
	I0318 05:07:29.827159   21713 host.go:66] Checking if "stopped-upgrade-211000" exists ...
	I0318 05:07:29.827180   21713 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-211000"
	I0318 05:07:29.827192   21713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:07:29.827194   21713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-211000"
	I0318 05:07:29.827672   21713 retry.go:31] will retry after 1.430317702s: connect: dial unix /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/stopped-upgrade-211000/monitor: connect: connection refused
	I0318 05:07:29.833210   21713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:07:32.320054   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:32.320238   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:07:32.335915   21725 logs.go:276] 2 containers: [957651f315c0 d4f26039d08f]
	I0318 05:07:32.336007   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:07:32.348603   21725 logs.go:276] 2 containers: [a5be5dc1602f fb7044aa6fe8]
	I0318 05:07:32.348670   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:07:32.365035   21725 logs.go:276] 2 containers: [8001b6be7e31 979957847e88]
	I0318 05:07:32.365104   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:07:32.375890   21725 logs.go:276] 2 containers: [cffc35d80bf6 82437f53be1f]
	I0318 05:07:32.375965   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:07:32.386554   21725 logs.go:276] 2 containers: [a9ce4de1a696 eab46fcf2c4f]
	I0318 05:07:32.386622   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:07:32.397481   21725 logs.go:276] 2 containers: [50247ffa021c 1cf5bd1f2f5d]
	I0318 05:07:32.397556   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:07:29.837214   21713 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 05:07:29.837221   21713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 05:07:29.837228   21713 sshutil.go:53] new ssh client: &{IP:localhost Port:54278 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	I0318 05:07:29.906401   21713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 05:07:29.911468   21713 api_server.go:52] waiting for apiserver process to appear ...
	I0318 05:07:29.911510   21713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:07:29.915458   21713 api_server.go:72] duration metric: took 100.704333ms to wait for apiserver process to appear ...
	I0318 05:07:29.915465   21713 api_server.go:88] waiting for apiserver healthz status ...
	I0318 05:07:29.915472   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:29.924168   21713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 05:07:31.261009   21713 kapi.go:59] client config for stopped-upgrade-211000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/client.key", CAFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10656aa80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 05:07:31.261136   21713 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-211000"
	W0318 05:07:31.261142   21713 addons.go:243] addon default-storageclass should already be in state true
	I0318 05:07:31.261154   21713 host.go:66] Checking if "stopped-upgrade-211000" exists ...
	I0318 05:07:31.261867   21713 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 05:07:31.261873   21713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 05:07:31.261879   21713 sshutil.go:53] new ssh client: &{IP:localhost Port:54278 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	I0318 05:07:31.299489   21713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 05:07:32.408204   21725 logs.go:276] 0 containers: []
	W0318 05:07:32.408216   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:07:32.408276   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:07:32.419058   21725 logs.go:276] 2 containers: [46cde0409174 81416833671d]
	I0318 05:07:32.419074   21725 logs.go:123] Gathering logs for kube-scheduler [82437f53be1f] ...
	I0318 05:07:32.419080   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82437f53be1f"
	I0318 05:07:32.435880   21725 logs.go:123] Gathering logs for kube-proxy [eab46fcf2c4f] ...
	I0318 05:07:32.435891   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eab46fcf2c4f"
	I0318 05:07:32.447904   21725 logs.go:123] Gathering logs for storage-provisioner [81416833671d] ...
	I0318 05:07:32.447916   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81416833671d"
	I0318 05:07:32.464424   21725 logs.go:123] Gathering logs for kube-apiserver [957651f315c0] ...
	I0318 05:07:32.464437   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 957651f315c0"
	I0318 05:07:32.478505   21725 logs.go:123] Gathering logs for kube-apiserver [d4f26039d08f] ...
	I0318 05:07:32.478515   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f26039d08f"
	I0318 05:07:32.517518   21725 logs.go:123] Gathering logs for etcd [a5be5dc1602f] ...
	I0318 05:07:32.517534   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5be5dc1602f"
	I0318 05:07:32.531534   21725 logs.go:123] Gathering logs for etcd [fb7044aa6fe8] ...
	I0318 05:07:32.531544   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7044aa6fe8"
	I0318 05:07:32.545879   21725 logs.go:123] Gathering logs for coredns [8001b6be7e31] ...
	I0318 05:07:32.545889   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8001b6be7e31"
	I0318 05:07:32.557215   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:07:32.557227   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:07:32.579247   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:07:32.579258   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:07:32.584409   21725 logs.go:123] Gathering logs for coredns [979957847e88] ...
	I0318 05:07:32.584417   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 979957847e88"
	I0318 05:07:32.596128   21725 logs.go:123] Gathering logs for kube-scheduler [cffc35d80bf6] ...
	I0318 05:07:32.596138   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cffc35d80bf6"
	I0318 05:07:32.608050   21725 logs.go:123] Gathering logs for kube-controller-manager [50247ffa021c] ...
	I0318 05:07:32.608059   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50247ffa021c"
	I0318 05:07:32.626122   21725 logs.go:123] Gathering logs for storage-provisioner [46cde0409174] ...
	I0318 05:07:32.626134   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46cde0409174"
	I0318 05:07:32.639528   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:07:32.639540   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:07:32.675430   21725 logs.go:123] Gathering logs for kube-proxy [a9ce4de1a696] ...
	I0318 05:07:32.675442   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9ce4de1a696"
	I0318 05:07:32.688633   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:07:32.688644   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:07:32.701508   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:07:32.701519   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:07:32.740384   21725 logs.go:123] Gathering logs for kube-controller-manager [1cf5bd1f2f5d] ...
	I0318 05:07:32.740398   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf5bd1f2f5d"
	I0318 05:07:35.266293   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:34.917400   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:34.917420   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:40.268369   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:40.268414   21725 kubeadm.go:591] duration metric: took 4m6.421998042s to restartPrimaryControlPlane
	W0318 05:07:40.268451   21725 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 05:07:40.268470   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0318 05:07:41.317558   21725 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.049109833s)
	I0318 05:07:41.317641   21725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 05:07:41.322444   21725 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 05:07:41.325332   21725 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 05:07:41.328075   21725 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 05:07:41.328084   21725 kubeadm.go:156] found existing configuration files:
	
	I0318 05:07:41.328103   21725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/admin.conf
	I0318 05:07:41.331189   21725 kubeadm.go:162] "https://control-plane.minikube.internal:54379" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 05:07:41.331216   21725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 05:07:41.333989   21725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/kubelet.conf
	I0318 05:07:41.336406   21725 kubeadm.go:162] "https://control-plane.minikube.internal:54379" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 05:07:41.336427   21725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 05:07:41.339382   21725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/controller-manager.conf
	I0318 05:07:41.342056   21725 kubeadm.go:162] "https://control-plane.minikube.internal:54379" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 05:07:41.342079   21725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 05:07:41.344510   21725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/scheduler.conf
	I0318 05:07:41.347467   21725 kubeadm.go:162] "https://control-plane.minikube.internal:54379" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:54379 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 05:07:41.347490   21725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 05:07:41.350616   21725 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 05:07:41.366858   21725 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0318 05:07:41.366896   21725 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 05:07:41.414505   21725 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 05:07:41.414662   21725 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 05:07:41.414712   21725 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 05:07:41.466127   21725 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 05:07:41.470305   21725 out.go:204]   - Generating certificates and keys ...
	I0318 05:07:41.470337   21725 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 05:07:41.470368   21725 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 05:07:41.470400   21725 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 05:07:41.470426   21725 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 05:07:41.470461   21725 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 05:07:41.470492   21725 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 05:07:41.470521   21725 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 05:07:41.470548   21725 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 05:07:41.470585   21725 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 05:07:41.470624   21725 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 05:07:41.470647   21725 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 05:07:41.470673   21725 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 05:07:41.560038   21725 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 05:07:41.794443   21725 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 05:07:42.026427   21725 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 05:07:42.180444   21725 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 05:07:42.210309   21725 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 05:07:42.212157   21725 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 05:07:42.212180   21725 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 05:07:42.301278   21725 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 05:07:42.304045   21725 out.go:204]   - Booting up control plane ...
	I0318 05:07:42.304087   21725 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 05:07:42.304127   21725 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 05:07:42.304164   21725 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 05:07:42.304204   21725 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 05:07:42.304281   21725 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 05:07:39.917450   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:39.917475   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:47.308751   21725 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.005306 seconds
	I0318 05:07:47.308856   21725 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 05:07:47.315306   21725 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 05:07:44.917604   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:44.917631   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:47.823437   21725 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 05:07:47.823527   21725 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-349000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 05:07:48.327707   21725 kubeadm.go:309] [bootstrap-token] Using token: d44j0d.tbclig13jiu1wa7k
	I0318 05:07:48.333908   21725 out.go:204]   - Configuring RBAC rules ...
	I0318 05:07:48.333978   21725 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 05:07:48.334027   21725 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 05:07:48.338310   21725 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 05:07:48.339177   21725 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 05:07:48.339884   21725 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 05:07:48.340795   21725 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 05:07:48.343957   21725 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 05:07:48.522032   21725 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 05:07:48.731633   21725 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 05:07:48.732225   21725 kubeadm.go:309] 
	I0318 05:07:48.732265   21725 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 05:07:48.732270   21725 kubeadm.go:309] 
	I0318 05:07:48.732317   21725 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 05:07:48.732320   21725 kubeadm.go:309] 
	I0318 05:07:48.732333   21725 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 05:07:48.732364   21725 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 05:07:48.732392   21725 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 05:07:48.732397   21725 kubeadm.go:309] 
	I0318 05:07:48.732433   21725 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 05:07:48.732436   21725 kubeadm.go:309] 
	I0318 05:07:48.732463   21725 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 05:07:48.732469   21725 kubeadm.go:309] 
	I0318 05:07:48.732497   21725 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 05:07:48.732534   21725 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 05:07:48.732585   21725 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 05:07:48.732589   21725 kubeadm.go:309] 
	I0318 05:07:48.732633   21725 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 05:07:48.732680   21725 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 05:07:48.732684   21725 kubeadm.go:309] 
	I0318 05:07:48.732734   21725 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token d44j0d.tbclig13jiu1wa7k \
	I0318 05:07:48.732792   21725 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:2c4297b91ace817e0fb1c32526c2ad664eb333850689868816794ba1e9dfc07f \
	I0318 05:07:48.732804   21725 kubeadm.go:309] 	--control-plane 
	I0318 05:07:48.732806   21725 kubeadm.go:309] 
	I0318 05:07:48.732851   21725 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 05:07:48.732859   21725 kubeadm.go:309] 
	I0318 05:07:48.732900   21725 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token d44j0d.tbclig13jiu1wa7k \
	I0318 05:07:48.732957   21725 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:2c4297b91ace817e0fb1c32526c2ad664eb333850689868816794ba1e9dfc07f 
	I0318 05:07:48.733008   21725 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 05:07:48.733014   21725 cni.go:84] Creating CNI manager for ""
	I0318 05:07:48.733022   21725 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:07:48.736976   21725 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 05:07:48.743922   21725 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 05:07:48.748027   21725 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 05:07:48.753410   21725 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 05:07:48.753469   21725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 05:07:48.753486   21725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-349000 minikube.k8s.io/updated_at=2024_03_18T05_07_48_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=running-upgrade-349000 minikube.k8s.io/primary=true
	I0318 05:07:48.807022   21725 kubeadm.go:1107] duration metric: took 53.60575ms to wait for elevateKubeSystemPrivileges
	I0318 05:07:48.807041   21725 ops.go:34] apiserver oom_adj: -16
	W0318 05:07:48.807054   21725 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 05:07:48.807057   21725 kubeadm.go:393] duration metric: took 4m14.974716291s to StartCluster
	I0318 05:07:48.807067   21725 settings.go:142] acquiring lock: {Name:mkc727ca725e75d24ce65050e373ec9e186fcd50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:07:48.807151   21725 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:07:48.807588   21725 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/kubeconfig: {Name:mke65151970e01af41afaa654a36ecdb221d1a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:07:48.807774   21725 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:07:48.811852   21725 out.go:177] * Verifying Kubernetes components...
	I0318 05:07:48.807862   21725 config.go:182] Loaded profile config "running-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 05:07:48.807831   21725 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 05:07:48.819850   21725 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-349000"
	I0318 05:07:48.819856   21725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:07:48.819864   21725 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-349000"
	W0318 05:07:48.819871   21725 addons.go:243] addon storage-provisioner should already be in state true
	I0318 05:07:48.819888   21725 host.go:66] Checking if "running-upgrade-349000" exists ...
	I0318 05:07:48.819888   21725 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-349000"
	I0318 05:07:48.819917   21725 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-349000"
	I0318 05:07:48.820139   21725 retry.go:31] will retry after 1.334452348s: connect: dial unix /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/running-upgrade-349000/monitor: connect: connection refused
	I0318 05:07:48.820995   21725 kapi.go:59] client config for running-upgrade-349000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/running-upgrade-349000/client.key", CAFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10578ea80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 05:07:48.821111   21725 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-349000"
	W0318 05:07:48.821116   21725 addons.go:243] addon default-storageclass should already be in state true
	I0318 05:07:48.821123   21725 host.go:66] Checking if "running-upgrade-349000" exists ...
	I0318 05:07:48.821788   21725 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 05:07:48.821793   21725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 05:07:48.821798   21725 sshutil.go:53] new ssh client: &{IP:localhost Port:54315 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/running-upgrade-349000/id_rsa Username:docker}
	I0318 05:07:48.908828   21725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 05:07:48.914015   21725 api_server.go:52] waiting for apiserver process to appear ...
	I0318 05:07:48.914060   21725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:07:48.917765   21725 api_server.go:72] duration metric: took 109.983958ms to wait for apiserver process to appear ...
	I0318 05:07:48.917774   21725 api_server.go:88] waiting for apiserver healthz status ...
	I0318 05:07:48.917780   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:48.931465   21725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 05:07:50.162144   21725 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:07:50.166171   21725 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 05:07:50.166183   21725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 05:07:50.166200   21725 sshutil.go:53] new ssh client: &{IP:localhost Port:54315 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/running-upgrade-349000/id_rsa Username:docker}
	I0318 05:07:50.208522   21725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 05:07:49.917847   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:49.917893   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:53.919691   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:53.919734   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:54.918235   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:54.918291   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:59.918886   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:59.918916   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0318 05:08:01.350791   21713 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0318 05:08:01.354672   21713 out.go:177] * Enabled addons: storage-provisioner
	I0318 05:07:58.920023   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:58.920051   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:01.362510   21713 addons.go:505] duration metric: took 31.548730459s for enable addons: enabled=[storage-provisioner]
	I0318 05:08:03.920241   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:03.920270   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:04.919864   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:04.919886   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:08.920578   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:08.920606   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:09.920816   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:09.920849   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:13.921005   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:13.921037   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:14.922085   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:14.922114   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:18.921625   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:18.921645   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0318 05:08:19.242942   21725 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0318 05:08:19.247039   21725 out.go:177] * Enabled addons: storage-provisioner
	I0318 05:08:19.254814   21725 addons.go:505] duration metric: took 30.447965334s for enable addons: enabled=[storage-provisioner]
	I0318 05:08:19.923657   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:19.923682   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:23.922802   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:23.922847   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:24.925629   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:24.925649   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:28.924116   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:28.924168   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:29.927625   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:29.927721   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:08:29.938276   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:08:29.938344   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:08:29.948503   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:08:29.948575   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:08:29.958717   21713 logs.go:276] 2 containers: [61e158044db5 56b5ade2e09c]
	I0318 05:08:29.958781   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:08:29.968426   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:08:29.968492   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:08:29.978685   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:08:29.978754   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:08:29.988589   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:08:29.988661   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:08:29.998984   21713 logs.go:276] 0 containers: []
	W0318 05:08:29.998995   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:08:29.999057   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:08:30.009936   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:08:30.009951   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:08:30.009956   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:08:30.021521   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:08:30.021532   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:08:30.026487   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:08:30.026494   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:08:30.040394   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:08:30.040404   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:08:30.054820   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:08:30.054830   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:08:30.066859   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:08:30.066869   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:08:30.088250   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:08:30.088261   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:08:30.112930   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:08:30.112937   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:08:30.124212   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:08:30.124224   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:08:30.159600   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:08:30.159692   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:08:30.161700   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:08:30.161705   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:08:30.197214   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:08:30.197225   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:08:30.210826   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:08:30.210836   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:08:30.226770   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:08:30.226780   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:08:30.238215   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:08:30.238225   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:08:30.238252   21713 out.go:239] X Problems detected in kubelet:
	W0318 05:08:30.238256   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:08:30.238261   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:08:30.238266   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:08:30.238269   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:08:33.925777   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:33.925813   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:38.927705   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:38.927745   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:40.242106   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:43.927911   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:43.927936   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:45.244304   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:45.244546   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:08:45.273700   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:08:45.273827   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:08:45.291182   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:08:45.291263   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:08:45.305127   21713 logs.go:276] 2 containers: [61e158044db5 56b5ade2e09c]
	I0318 05:08:45.305205   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:08:45.323236   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:08:45.323308   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:08:45.334382   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:08:45.334450   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:08:45.345033   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:08:45.345106   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:08:45.355596   21713 logs.go:276] 0 containers: []
	W0318 05:08:45.355616   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:08:45.355675   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:08:45.366029   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:08:45.366045   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:08:45.366051   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:08:45.390326   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:08:45.390335   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:08:45.401727   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:08:45.401738   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:08:45.437048   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:08:45.437142   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:08:45.439210   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:08:45.439214   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:08:45.454091   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:08:45.454104   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:08:45.466782   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:08:45.466793   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:08:45.482082   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:08:45.482093   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:08:45.498838   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:08:45.498847   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:08:45.510210   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:08:45.510221   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:08:45.514752   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:08:45.514759   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:08:45.551110   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:08:45.551121   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:08:45.565945   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:08:45.565956   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:08:45.579792   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:08:45.579803   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:08:45.595707   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:08:45.595717   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:08:45.595743   21713 out.go:239] X Problems detected in kubelet:
	W0318 05:08:45.595748   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:08:45.595753   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:08:45.595759   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:08:45.595762   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:08:48.929970   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:48.930136   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:08:48.940991   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:08:48.941071   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:08:48.951503   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:08:48.951575   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:08:48.961707   21725 logs.go:276] 2 containers: [27b1f1e110ed 7ab83b2ece4e]
	I0318 05:08:48.961768   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:08:48.972078   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:08:48.972148   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:08:48.982517   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:08:48.982592   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:08:48.992856   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:08:48.992916   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:08:49.003232   21725 logs.go:276] 0 containers: []
	W0318 05:08:49.003244   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:08:49.003302   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:08:49.014032   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:08:49.014046   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:08:49.014053   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:08:49.028570   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:08:49.028582   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:08:49.042512   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:08:49.042526   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:08:49.058927   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:08:49.058941   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:08:49.091884   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:08:49.091895   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:08:49.096360   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:08:49.096370   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:08:49.110429   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:08:49.110439   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:08:49.124760   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:08:49.124771   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:08:49.142013   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:08:49.142022   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:08:49.166925   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:08:49.166932   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:08:49.178460   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:08:49.178472   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:08:49.219229   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:08:49.219239   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:08:49.230619   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:08:49.230633   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:08:51.743804   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:56.746029   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:56.746192   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:08:56.764260   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:08:56.764355   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:08:56.778091   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:08:56.778168   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:08:56.789515   21725 logs.go:276] 2 containers: [27b1f1e110ed 7ab83b2ece4e]
	I0318 05:08:56.789584   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:08:56.800587   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:08:56.800657   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:08:56.815602   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:08:56.815676   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:08:56.826393   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:08:56.826460   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:08:56.836345   21725 logs.go:276] 0 containers: []
	W0318 05:08:56.836356   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:08:56.836416   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:08:56.849474   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:08:56.849490   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:08:56.849495   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:08:56.863003   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:08:56.863013   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:08:56.874901   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:08:56.874912   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:08:56.886351   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:08:56.886360   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:08:56.901060   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:08:56.901074   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:08:56.912862   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:08:56.912873   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:08:56.937311   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:08:56.937322   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:08:56.971691   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:08:56.971702   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:08:56.986665   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:08:56.986676   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:08:57.005194   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:08:57.005206   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:08:57.017165   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:08:57.017177   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:08:57.029461   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:08:57.029472   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:08:57.065265   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:08:57.065278   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:08:55.599578   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:59.569640   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:00.601782   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:00.601973   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:00.627328   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:09:00.627439   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:00.642545   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:09:00.642620   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:00.654662   21713 logs.go:276] 2 containers: [61e158044db5 56b5ade2e09c]
	I0318 05:09:00.654743   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:00.665349   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:09:00.665425   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:00.675253   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:09:00.675334   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:00.685887   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:09:00.685960   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:00.695958   21713 logs.go:276] 0 containers: []
	W0318 05:09:00.695970   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:00.696031   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:00.706689   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:09:00.706709   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:00.706715   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:00.711452   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:09:00.711462   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:09:00.722788   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:09:00.722797   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:09:00.734026   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:09:00.734037   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:09:00.745703   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:09:00.745715   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:09:00.762757   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:09:00.762766   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:09:00.774626   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:00.774638   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:00.799808   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:09:00.799820   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:00.811573   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:00.811583   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:09:00.847904   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:09:00.847999   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:09:00.850002   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:00.850007   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:00.885096   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:09:00.885107   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:09:00.899847   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:09:00.899858   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:09:00.913658   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:09:00.913668   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:09:00.927496   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:09:00.927509   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:09:00.927533   21713 out.go:239] X Problems detected in kubelet:
	W0318 05:09:00.927541   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:09:00.927545   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:09:00.927550   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:09:00.927553   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:09:04.571829   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:04.572057   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:04.602961   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:09:04.603056   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:04.621272   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:09:04.621346   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:04.636105   21725 logs.go:276] 2 containers: [27b1f1e110ed 7ab83b2ece4e]
	I0318 05:09:04.636172   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:04.646554   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:09:04.646616   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:04.656942   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:09:04.657007   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:04.667548   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:09:04.667608   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:04.677680   21725 logs.go:276] 0 containers: []
	W0318 05:09:04.677690   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:04.677743   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:04.688137   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:09:04.688153   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:04.688158   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:04.762837   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:09:04.762849   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:09:04.777199   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:09:04.777211   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:09:04.788454   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:09:04.788465   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:09:04.799624   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:09:04.799639   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:09:04.811738   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:09:04.811750   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:04.826260   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:04.826274   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:04.849098   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:04.849107   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:09:04.882593   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:04.882603   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:04.887606   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:09:04.887615   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:09:04.902393   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:09:04.902404   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:09:04.917611   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:09:04.917621   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:09:04.929556   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:09:04.929567   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:09:07.453676   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:10.931069   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:12.455747   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:12.455908   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:12.468591   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:09:12.468668   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:12.479107   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:09:12.479172   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:12.489667   21725 logs.go:276] 2 containers: [27b1f1e110ed 7ab83b2ece4e]
	I0318 05:09:12.489745   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:12.499750   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:09:12.499819   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:12.510039   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:09:12.510114   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:12.520951   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:09:12.521018   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:12.531528   21725 logs.go:276] 0 containers: []
	W0318 05:09:12.531542   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:12.531603   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:12.542830   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:09:12.542847   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:12.542854   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:12.579418   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:09:12.579432   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:09:12.593741   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:09:12.593752   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:09:12.610900   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:12.610914   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:12.635306   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:12.635318   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:12.641477   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:09:12.641490   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:09:12.656594   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:09:12.656608   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:09:12.668599   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:09:12.668613   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:09:12.681107   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:09:12.681120   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:09:12.693904   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:09:12.693914   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:09:12.711907   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:09:12.711921   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:09:12.724025   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:09:12.724037   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:12.735712   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:12.735725   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:09:15.272755   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:15.933250   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:15.933408   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:15.953516   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:09:15.953601   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:15.966065   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:09:15.966139   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:15.976983   21713 logs.go:276] 2 containers: [61e158044db5 56b5ade2e09c]
	I0318 05:09:15.977052   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:15.986945   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:09:15.987007   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:15.998041   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:09:15.998117   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:16.009670   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:09:16.009738   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:16.025192   21713 logs.go:276] 0 containers: []
	W0318 05:09:16.025208   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:16.025268   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:16.035536   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:09:16.035552   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:16.035558   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:09:16.072240   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:09:16.072336   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:09:16.074352   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:09:16.074358   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:09:16.089532   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:16.089542   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:16.114184   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:09:16.114193   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:09:16.128634   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:09:16.128649   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:09:16.140227   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:09:16.140238   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:09:16.157488   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:16.157498   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:16.161914   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:16.161920   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:16.198373   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:09:16.198384   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:09:16.215360   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:09:16.215369   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:09:16.229305   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:09:16.229321   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:09:16.240760   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:09:16.240772   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:09:16.252397   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:09:16.252409   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:16.264152   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:09:16.264167   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:09:16.264197   21713 out.go:239] X Problems detected in kubelet:
	W0318 05:09:16.264201   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:09:16.264205   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:09:16.264209   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:09:16.264212   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
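
The two kubelet problems flagged above are an authorization denial: the kubelet on stopped-upgrade-211000 is refused a list of the kube-proxy ConfigMap because the apiserver reports "no relationship found between node ... and this object", which typically happens while a stopped cluster is coming back up, before the pods that reference the ConfigMap have been re-bound to the node. minikube surfaces such lines by scanning the kubelet journal for problem-looking entries (logs.go:138) and replaying them under "X Problems detected in kubelet:". A rough Go sketch of that scan, with an assumed match heuristic rather than minikube's curated pattern list:

    package main

    import (
    	"bufio"
    	"bytes"
    	"fmt"
    	"os/exec"
    	"regexp"
    )

    func main() {
    	// Mirrors: /bin/bash -c "sudo journalctl -u kubelet -n 400"
    	out, err := exec.Command("/bin/bash", "-c",
    		"sudo journalctl -u kubelet -n 400").Output()
    	if err != nil {
    		fmt.Println("journalctl failed:", err)
    		return
    	}
    	// Assumed heuristic; the real tool matches a known list of problems.
    	problem := regexp.MustCompile(`(?i)(forbidden|failed to list|failed to watch)`)
    	var problems []string
    	sc := bufio.NewScanner(bytes.NewReader(out))
    	for sc.Scan() {
    		if line := sc.Text(); problem.MatchString(line) {
    			problems = append(problems, line)
    		}
    	}
    	if len(problems) > 0 {
    		fmt.Println("X Problems detected in kubelet:")
    		for _, p := range problems {
    			fmt.Println(" ", p)
    		}
    	}
    }
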
	I0318 05:09:20.274897   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:20.275060   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:20.289822   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:09:20.289898   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:20.301831   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:09:20.301901   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:20.319130   21725 logs.go:276] 2 containers: [27b1f1e110ed 7ab83b2ece4e]
	I0318 05:09:20.319195   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:20.334288   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:09:20.334357   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:20.344230   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:09:20.344293   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:20.357504   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:09:20.357571   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:20.367296   21725 logs.go:276] 0 containers: []
	W0318 05:09:20.367309   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:20.367367   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:20.377613   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:09:20.377628   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:20.377634   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:09:20.410353   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:20.410362   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:20.415255   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:09:20.415263   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:09:20.429586   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:09:20.429598   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:09:20.445501   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:09:20.445512   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:09:20.457071   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:20.457083   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:20.480201   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:20.480208   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:20.516738   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:09:20.516751   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:09:20.531569   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:09:20.531583   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:09:20.543048   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:09:20.543061   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:09:20.555001   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:09:20.555013   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:09:20.570458   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:09:20.570469   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:09:20.595542   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:09:20.595557   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:23.110634   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:26.268064   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:28.112725   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
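
Two test processes interleave throughout this section (klog prints the PID after the timestamp: 21713 and 21725). Each drives its own QEMU VM, and under QEMU's user-mode networking every guest receives the same default address, 10.0.2.15, which is why both probe identical-looking healthz URLs. Each probe gives up roughly five seconds after "Checking apiserver healthz" with a client timeout while awaiting response headers. A minimal Go reproduction of that probe, with the timeout value and TLS handling assumed:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		// Assumed: the log shows about five seconds between the
    		// "Checking apiserver healthz" line and the matching
    		// "stopped: ... Client.Timeout exceeded" line for one PID.
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Assumed: the apiserver certificate is not trusted by the
    			// probing host, so verification is skipped for the check.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://10.0.2.15:8443/healthz")
    	if err != nil {
    		// Failure mode matching the log, e.g.:
    		// context deadline exceeded (Client.Timeout exceeded while awaiting headers)
    		fmt.Println("stopped:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthz:", resp.Status)
    }
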
	I0318 05:09:28.112878   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:28.127955   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:09:28.128049   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:28.140109   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:09:28.140189   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:28.150836   21725 logs.go:276] 2 containers: [27b1f1e110ed 7ab83b2ece4e]
	I0318 05:09:28.150904   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:28.163217   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:09:28.163281   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:28.173885   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:09:28.173962   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:28.188576   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:09:28.188634   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:28.198601   21725 logs.go:276] 0 containers: []
	W0318 05:09:28.198613   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:28.198666   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:28.209182   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:09:28.209198   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:28.209204   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:28.245042   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:09:28.245055   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:09:28.259403   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:09:28.259416   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:09:28.277429   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:09:28.277440   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:28.289258   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:28.289269   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:09:28.323052   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:28.323061   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:28.327566   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:09:28.327574   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:09:28.339205   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:09:28.339217   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:09:28.355929   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:09:28.355939   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:09:28.367629   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:09:28.367643   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:09:28.378606   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:28.378616   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:28.403520   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:09:28.403529   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:09:28.417415   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:09:28.417425   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:09:30.930963   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:31.270242   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:31.270346   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:31.283179   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:09:31.283252   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:31.294119   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:09:31.294182   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:31.305477   21713 logs.go:276] 2 containers: [61e158044db5 56b5ade2e09c]
	I0318 05:09:31.305545   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:31.315934   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:09:31.316008   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:31.326324   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:09:31.326391   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:31.336478   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:09:31.336542   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:31.346540   21713 logs.go:276] 0 containers: []
	W0318 05:09:31.346550   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:31.346601   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:31.357812   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:09:31.357829   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:09:31.357834   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:09:31.374614   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:09:31.374624   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:09:31.389412   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:09:31.389428   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:09:31.405927   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:09:31.405938   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:09:31.418162   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:09:31.418175   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:09:31.429814   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:09:31.429824   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:09:31.447785   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:31.447796   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:09:31.482257   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:09:31.482352   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:09:31.484441   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:31.484451   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:31.488229   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:31.488236   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:31.528283   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:09:31.528295   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:09:31.544170   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:09:31.544180   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:09:31.560845   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:31.560855   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:31.584276   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:09:31.584285   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:31.600505   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:09:31.600515   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:09:31.600542   21713 out.go:239] X Problems detected in kubelet:
	W0318 05:09:31.600547   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:09:31.600550   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:09:31.600555   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:09:31.600558   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:09:35.933115   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:35.933286   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:35.945907   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:09:35.945980   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:35.961313   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:09:35.961381   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:35.971983   21725 logs.go:276] 2 containers: [27b1f1e110ed 7ab83b2ece4e]
	I0318 05:09:35.972054   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:35.982542   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:09:35.982618   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:35.992632   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:09:35.992707   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:36.002763   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:09:36.002833   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:36.016878   21725 logs.go:276] 0 containers: []
	W0318 05:09:36.016890   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:36.016952   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:36.026877   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:09:36.026894   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:09:36.026899   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:09:36.044915   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:09:36.044924   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:36.057084   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:36.057099   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:09:36.091872   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:36.091882   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:36.096485   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:36.096492   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:36.134351   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:09:36.134365   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:09:36.146025   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:09:36.146036   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:09:36.158545   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:09:36.158554   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:09:36.170656   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:36.170667   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:36.194855   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:09:36.194863   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:09:36.209872   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:09:36.209881   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:09:36.223514   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:09:36.223524   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:09:36.234922   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:09:36.234931   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:09:38.751288   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:41.603094   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:43.753452   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:43.753632   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:43.771060   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:09:43.771163   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:43.789681   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:09:43.789773   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:43.800487   21725 logs.go:276] 2 containers: [27b1f1e110ed 7ab83b2ece4e]
	I0318 05:09:43.800572   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:43.810804   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:09:43.810872   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:43.820911   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:09:43.820992   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:43.831215   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:09:43.831296   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:43.841415   21725 logs.go:276] 0 containers: []
	W0318 05:09:43.841430   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:43.841504   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:43.851702   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:09:43.851717   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:09:43.851724   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:43.862808   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:43.862818   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:43.867299   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:43.867307   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:43.902270   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:09:43.902283   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:09:43.916912   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:09:43.916923   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:09:43.928568   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:43.928580   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:43.952381   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:09:43.952391   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:09:43.969450   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:09:43.969461   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:09:43.981174   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:43.981185   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:09:44.015421   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:09:44.015432   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:09:44.034814   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:09:44.034828   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:09:44.046685   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:09:44.046697   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:09:44.058438   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:09:44.058448   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:09:46.575286   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:46.605220   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:46.605386   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:46.632813   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:09:46.632907   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:46.649666   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:09:46.649751   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:46.663435   21713 logs.go:276] 4 containers: [b50d97f3a440 7c8628f6727d 61e158044db5 56b5ade2e09c]
	I0318 05:09:46.663511   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:46.678122   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:09:46.678196   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:46.688532   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:09:46.688607   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:46.700795   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:09:46.700865   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:46.712050   21713 logs.go:276] 0 containers: []
	W0318 05:09:46.712062   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:46.712123   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:46.722977   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:09:46.722996   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:46.723001   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:09:46.759148   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:09:46.759244   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:09:46.761258   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:46.761262   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:46.765402   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:09:46.765411   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:09:46.783025   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:46.783037   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:46.807098   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:09:46.807106   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:46.818629   21713 logs.go:123] Gathering logs for coredns [7c8628f6727d] ...
	I0318 05:09:46.818642   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8628f6727d"
	I0318 05:09:46.830166   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:46.830176   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:46.864236   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:09:46.864246   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:09:46.878273   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:09:46.878284   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:09:46.890115   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:09:46.890126   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:09:46.901919   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:09:46.901931   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:09:46.913974   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:09:46.913986   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:09:46.929091   21713 logs.go:123] Gathering logs for coredns [b50d97f3a440] ...
	I0318 05:09:46.929102   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b50d97f3a440"
	I0318 05:09:46.940445   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:09:46.940456   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:09:46.958782   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:09:46.958793   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:09:46.970343   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:09:46.970354   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:09:46.970380   21713 out.go:239] X Problems detected in kubelet:
	W0318 05:09:46.970386   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:09:46.970391   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:09:46.970394   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:09:46.970398   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:09:51.577552   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:51.577778   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:51.592871   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:09:51.592954   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:51.604009   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:09:51.604074   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:51.614813   21725 logs.go:276] 2 containers: [27b1f1e110ed 7ab83b2ece4e]
	I0318 05:09:51.614878   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:51.625644   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:09:51.625720   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:51.636691   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:09:51.636762   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:51.647455   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:09:51.647527   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:51.657668   21725 logs.go:276] 0 containers: []
	W0318 05:09:51.657679   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:51.657744   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:51.668759   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:09:51.668774   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:09:51.668779   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:09:51.680401   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:09:51.680410   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:09:51.694699   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:09:51.694710   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:09:51.717496   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:09:51.717506   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:09:51.729073   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:51.729087   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:51.752625   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:51.752635   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:51.756820   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:51.756828   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:51.792620   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:09:51.792630   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:09:51.803942   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:09:51.803953   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:51.816406   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:09:51.816418   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:09:51.828072   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:51.828083   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:09:51.862237   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:09:51.862247   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:09:51.882525   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:09:51.882536   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:09:54.398692   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:56.974194   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:59.399398   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:59.399577   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:59.422057   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:09:59.422152   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:59.437304   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:09:59.437386   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:59.450449   21725 logs.go:276] 2 containers: [27b1f1e110ed 7ab83b2ece4e]
	I0318 05:09:59.450520   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:59.471961   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:09:59.472028   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:59.484668   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:09:59.484740   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:59.509226   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:09:59.509280   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:59.532407   21725 logs.go:276] 0 containers: []
	W0318 05:09:59.532419   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:59.532478   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:59.546717   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:09:59.546738   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:59.546744   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:59.573709   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:09:59.573730   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:59.595520   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:59.595533   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:09:59.637953   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:59.637967   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:59.642834   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:09:59.642842   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:09:59.654339   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:09:59.654351   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:09:59.672114   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:09:59.672127   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:09:59.685711   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:09:59.685722   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:09:59.696831   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:59.696842   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:59.760248   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:09:59.760261   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:09:59.790601   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:09:59.790616   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:09:59.809662   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:09:59.809675   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:09:59.821845   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:09:59.821860   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:10:02.338642   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:01.976265   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:01.976346   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:01.986966   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:10:01.987042   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:01.998650   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:10:01.998717   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:02.009081   21713 logs.go:276] 4 containers: [b50d97f3a440 7c8628f6727d 61e158044db5 56b5ade2e09c]
	I0318 05:10:02.009157   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:02.020418   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:10:02.020490   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:02.030533   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:10:02.030622   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:02.041313   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:10:02.041374   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:02.051642   21713 logs.go:276] 0 containers: []
	W0318 05:10:02.051653   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:02.051709   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:02.062387   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:10:02.062408   21713 logs.go:123] Gathering logs for coredns [7c8628f6727d] ...
	I0318 05:10:02.062414   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8628f6727d"
	I0318 05:10:02.074113   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:02.074124   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:10:02.110503   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:10:02.110597   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:10:02.112711   21713 logs.go:123] Gathering logs for coredns [b50d97f3a440] ...
	I0318 05:10:02.112717   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b50d97f3a440"
	I0318 05:10:02.124098   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:10:02.124109   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:10:02.135448   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:10:02.135457   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:10:02.149537   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:10:02.149547   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:10:02.161938   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:10:02.161949   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:10:02.176381   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:10:02.176391   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:10:02.187939   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:02.187948   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:02.192316   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:02.192322   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:02.228899   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:10:02.228909   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:10:02.246662   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:02.246672   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:02.271529   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:10:02.271537   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:02.284598   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:10:02.284612   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:10:02.299368   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:10:02.299379   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:10:02.314977   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:10:02.314989   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:10:02.315018   21713 out.go:239] X Problems detected in kubelet:
	W0318 05:10:02.315022   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:10:02.315026   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:10:02.315030   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:10:02.315033   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:10:07.340807   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:07.341163   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:07.379193   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:10:07.379324   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:07.398312   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:10:07.400873   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:07.420826   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:10:07.420905   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:07.434257   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:10:07.434329   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:07.445173   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:10:07.445240   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:07.456332   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:10:07.456397   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:07.467719   21725 logs.go:276] 0 containers: []
	W0318 05:10:07.467730   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:07.467793   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:07.481916   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:10:07.481935   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:10:07.481941   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:10:07.496369   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:10:07.496380   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:10:07.507590   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:10:07.507603   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:10:07.520675   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:10:07.520686   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:10:07.532422   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:10:07.532438   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:10:07.549545   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:07.549556   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:07.554658   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:10:07.554666   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:10:07.566263   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:10:07.566274   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:10:07.578597   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:10:07.578608   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:07.591858   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:07.591869   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:07.628161   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:10:07.628175   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:10:07.639810   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:10:07.639822   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:10:07.653899   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:10:07.653910   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:10:07.668583   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:07.668594   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:07.692098   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:07.692106   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:10:10.226758   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:12.317924   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:15.228948   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:15.229219   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:15.253928   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:10:15.254053   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:15.270773   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:10:15.270862   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:15.284584   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:10:15.284650   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:15.295847   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:10:15.295912   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:15.310703   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:10:15.310772   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:15.323527   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:10:15.323597   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:15.333800   21725 logs.go:276] 0 containers: []
	W0318 05:10:15.333809   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:15.333874   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:15.344844   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:10:15.344864   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:15.344871   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:15.350735   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:10:15.350742   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:10:15.362303   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:10:15.362315   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:10:15.373695   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:15.373708   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:10:15.408436   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:10:15.408450   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:10:15.428402   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:10:15.428414   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:10:15.443003   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:15.443013   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:15.467942   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:10:15.467957   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:10:15.482213   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:10:15.482225   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:10:15.493613   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:10:15.493626   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:10:15.506352   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:10:15.506364   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:10:15.518586   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:10:15.518597   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:10:15.530328   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:10:15.530338   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:10:15.548213   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:10:15.548224   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:15.560159   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:15.560171   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
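
The eight `docker ps` queries that open each gathering cycle differ only in the component name. A sketch that reproduces the discovery step in one loop, assuming a shell inside the guest where the `docker` CLI behaves as in the Run: lines above (kubeadm names its containers `k8s_<component>_<pod>_...`, so a name filter on the prefix recovers the IDs that the later `docker logs` calls consume):

    # One loop over the eight components queried per cycle.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
        printf '%s: ' "$c"
        docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}' | xargs echo
    done
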
	I0318 05:10:17.320113   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:17.320355   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:17.347657   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:10:17.347775   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:17.366444   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:10:17.366530   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:17.380247   21713 logs.go:276] 4 containers: [b50d97f3a440 7c8628f6727d 61e158044db5 56b5ade2e09c]
	I0318 05:10:17.380325   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:17.391739   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:10:17.391810   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:17.406104   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:10:17.406177   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:17.418306   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:10:17.418374   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:17.428486   21713 logs.go:276] 0 containers: []
	W0318 05:10:17.428498   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:17.428561   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:17.438549   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:10:17.438567   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:10:17.438574   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:10:17.469575   21713 logs.go:123] Gathering logs for coredns [7c8628f6727d] ...
	I0318 05:10:17.469588   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8628f6727d"
	I0318 05:10:17.491006   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:17.491018   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:17.495381   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:10:17.495391   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:10:17.509799   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:10:17.509810   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:10:17.521593   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:10:17.521602   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:10:17.533071   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:10:17.533081   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:10:17.555976   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:17.555988   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:17.581165   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:17.581175   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:17.618076   21713 logs.go:123] Gathering logs for coredns [b50d97f3a440] ...
	I0318 05:10:17.618086   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b50d97f3a440"
	I0318 05:10:17.632032   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:10:17.632043   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:10:17.646392   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:10:17.646401   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:10:17.657949   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:10:17.657960   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:17.670100   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:17.670111   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:10:17.704733   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:10:17.704827   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:10:17.706937   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:10:17.706942   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:10:17.718497   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:10:17.718507   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:10:17.718537   21713 out.go:239] X Problems detected in kubelet:
	W0318 05:10:17.718543   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:10:17.718553   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:10:17.718559   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:10:17.718564   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
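
The recurring "Problems detected in kubelet" block above is the Node authorizer at work: a kubelet credential (`system:node:<name>`) may only read ConfigMaps referenced by pods already bound to that node, and during this stopped-upgrade restore no pod on the node references the `kube-proxy` ConfigMap yet, hence "no relationship found between node ... and this object". A hypothetical follow-up check via impersonation, assuming the admin kubeconfig grants impersonation rights (not something this log confirms):

    # Reproduce the denial as the node user; expected answer is "no" until
    # a pod bound to the node references the ConfigMap.
    kubectl --kubeconfig /var/lib/minikube/kubeconfig \
        auth can-i list configmaps -n kube-system \
        --as system:node:stopped-upgrade-211000 --as-group system:nodes

In the usual case the warning clears on its own once the kube-proxy pod is scheduled to the node, which adds the missing edge to the authorizer's graph.
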
	I0318 05:10:18.102023   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:23.104075   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:23.104233   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:23.117292   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:10:23.117373   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:23.128421   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:10:23.128486   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:23.138529   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:10:23.138602   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:23.148944   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:10:23.149016   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:23.161850   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:10:23.161917   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:23.172318   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:10:23.172391   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:23.187178   21725 logs.go:276] 0 containers: []
	W0318 05:10:23.187194   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:23.187259   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:23.197533   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:10:23.197549   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:10:23.197555   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:10:23.212997   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:23.213009   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:23.238125   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:10:23.238132   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:10:23.252388   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:10:23.252399   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:10:23.264506   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:10:23.264517   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:10:23.275937   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:10:23.275948   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:23.287926   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:23.287937   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:10:23.321365   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:10:23.321375   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:10:23.335111   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:10:23.335123   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:10:23.346475   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:10:23.346491   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:10:23.358485   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:10:23.358496   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:10:23.370001   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:23.370013   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:23.374552   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:23.374557   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:23.409889   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:10:23.409902   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:10:23.421736   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:10:23.421748   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
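
Each "Gathering logs for <component> [<id>] ..." step resolves to the same bounded tail. A sketch using the container IDs discovered in the cycle above (they are specific to this run); the `--tail 400` cap keeps a crash-looping component from flooding the report:

    # Bounded per-container tails, as in the Gathering-logs steps.
    for id in 702f29aaa46e 5b71b688ca95 3b8d75585331 ec54c6127165 \
              fa7fdd57ee2f 9b5906d4a48f; do
        echo "=== container $id ==="
        docker logs --tail 400 "$id"
    done
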
	I0318 05:10:25.941125   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:27.722385   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:30.943298   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:30.943427   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:30.955559   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:10:30.955634   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:30.965822   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:10:30.965894   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:30.980233   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:10:30.980303   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:30.990802   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:10:30.990876   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:31.001684   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:10:31.001752   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:31.012576   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:10:31.012645   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:31.022648   21725 logs.go:276] 0 containers: []
	W0318 05:10:31.022659   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:31.022720   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:31.042890   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:10:31.042908   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:10:31.042915   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:10:31.054676   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:10:31.054687   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:10:31.066097   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:10:31.066106   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:31.079656   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:10:31.079668   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:10:31.097108   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:10:31.097118   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:10:31.108913   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:31.108926   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:10:31.142623   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:31.142632   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:31.147075   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:10:31.147084   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:10:31.158593   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:10:31.158607   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:10:31.179725   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:10:31.179738   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:10:31.191304   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:10:31.191314   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:10:31.205915   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:31.205926   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:31.241103   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:10:31.241115   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:10:31.255677   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:10:31.255689   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:10:31.271062   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:31.271073   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:32.723225   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:32.723351   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:32.737560   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:10:32.737629   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:32.749058   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:10:32.749131   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:32.759500   21713 logs.go:276] 4 containers: [b50d97f3a440 7c8628f6727d 61e158044db5 56b5ade2e09c]
	I0318 05:10:32.759576   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:32.771997   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:10:32.772075   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:32.782719   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:10:32.782786   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:32.792795   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:10:32.792883   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:33.797423   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:32.803592   21713 logs.go:276] 0 containers: []
	W0318 05:10:32.803604   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:32.803662   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:32.818494   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:10:32.818511   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:10:32.818516   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:10:32.830662   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:32.830673   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:32.834960   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:10:32.834968   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:10:32.850057   21713 logs.go:123] Gathering logs for coredns [b50d97f3a440] ...
	I0318 05:10:32.850081   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b50d97f3a440"
	I0318 05:10:32.861757   21713 logs.go:123] Gathering logs for coredns [7c8628f6727d] ...
	I0318 05:10:32.861767   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8628f6727d"
	I0318 05:10:32.873525   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:32.873534   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:32.897388   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:10:32.897395   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:32.908866   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:32.908877   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:10:32.945259   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:10:32.945353   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:10:32.947355   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:10:32.947359   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:10:32.958578   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:10:32.958589   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:10:32.973280   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:10:32.973292   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:10:32.990586   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:32.990597   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:33.030224   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:10:33.030234   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:10:33.042820   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:10:33.042832   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:10:33.064034   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:10:33.064045   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:10:33.078493   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:10:33.078503   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:10:33.078532   21713 out.go:239] X Problems detected in kubelet:
	W0318 05:10:33.078536   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:10:33.078539   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:10:33.078543   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:10:33.078547   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:10:38.798715   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:38.798925   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:38.826853   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:10:38.826933   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:38.838404   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:10:38.838483   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:38.849198   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:10:38.849266   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:38.863716   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:10:38.863784   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:38.874097   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:10:38.874163   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:38.885212   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:10:38.885274   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:38.895652   21725 logs.go:276] 0 containers: []
	W0318 05:10:38.895663   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:38.895716   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:38.906970   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:10:38.906988   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:10:38.906993   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:10:38.919139   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:10:38.919149   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:10:38.930599   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:10:38.930613   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:10:38.941943   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:38.941976   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:10:38.976707   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:38.976719   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:38.981732   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:38.981741   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:39.017377   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:10:39.017391   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:10:39.031766   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:10:39.031778   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:10:39.049522   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:10:39.049535   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:10:39.061145   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:10:39.061156   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:10:39.076097   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:10:39.076106   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:10:39.088058   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:39.088073   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:39.113136   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:10:39.113145   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:10:39.127049   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:10:39.127063   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:10:39.138095   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:10:39.138105   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
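
Besides the per-container tails, each cycle pulls four host-side sources. The commands below are copied from the Run: lines above (with `$(...)` in place of backticks) and assume a root-capable shell in the guest; the last line is the crictl-with-docker-fallback used for the "container status" step:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
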
	I0318 05:10:41.651138   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:46.653191   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:46.653344   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:46.664177   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:10:46.664263   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:46.674805   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:10:46.674880   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:46.685623   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:10:46.685700   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:46.696208   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:10:46.696273   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:46.707014   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:10:46.707100   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:46.718079   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:10:46.718144   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:46.727773   21725 logs.go:276] 0 containers: []
	W0318 05:10:46.727783   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:46.727839   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:46.737943   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:10:46.737958   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:46.737963   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:46.742614   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:10:46.742624   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:10:46.754025   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:10:46.754036   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:10:46.765285   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:10:46.765297   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:10:46.777190   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:10:46.777201   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:10:46.789080   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:10:46.789094   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:10:46.801054   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:46.801066   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:46.836656   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:10:46.836888   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:10:46.852822   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:10:46.852839   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:10:46.864732   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:10:46.864746   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:10:46.890496   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:10:46.890511   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:46.902174   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:46.902185   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:10:46.937571   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:10:46.937585   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:10:46.952086   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:10:46.952099   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:10:46.966728   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:46.966747   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:43.082363   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:49.492148   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:48.084434   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:48.084638   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:48.102594   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:10:48.102678   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:48.118812   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:10:48.118888   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:48.130036   21713 logs.go:276] 4 containers: [b50d97f3a440 7c8628f6727d 61e158044db5 56b5ade2e09c]
	I0318 05:10:48.130113   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:48.140750   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:10:48.140821   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:48.155611   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:10:48.155676   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:48.166329   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:10:48.166398   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:48.176047   21713 logs.go:276] 0 containers: []
	W0318 05:10:48.176060   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:48.176120   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:48.186490   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:10:48.186509   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:48.186515   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:48.190799   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:10:48.190808   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:10:48.217430   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:10:48.217440   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:10:48.229494   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:48.229503   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:48.253666   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:48.253674   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:48.288513   21713 logs.go:123] Gathering logs for coredns [b50d97f3a440] ...
	I0318 05:10:48.288523   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b50d97f3a440"
	I0318 05:10:48.300669   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:10:48.300679   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:10:48.312390   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:10:48.312400   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:10:48.330724   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:48.330735   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:10:48.367266   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:10:48.367364   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:10:48.369507   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:10:48.369512   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:10:48.383667   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:10:48.383677   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:10:48.397640   21713 logs.go:123] Gathering logs for coredns [7c8628f6727d] ...
	I0318 05:10:48.397651   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8628f6727d"
	I0318 05:10:48.409484   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:10:48.409500   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:10:48.421101   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:10:48.421111   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:10:48.432746   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:10:48.432759   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:48.444460   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:10:48.444471   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:10:48.444501   21713 out.go:239] X Problems detected in kubelet:
	W0318 05:10:48.444506   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:10:48.444509   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:10:48.444513   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:10:48.444517   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
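
The "describe nodes" step is the only one that talks to the API server rather than to the container runtime, and it does so through a kubectl binary pinned to the cluster version plus the in-guest kubeconfig, so it works regardless of what is configured on the host. Copied verbatim from the Run: lines above, it is also a reasonable command to rerun by hand when triaging this failure:

    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
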
	I0318 05:10:54.492449   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:54.492703   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:54.519252   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:10:54.519376   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:54.536522   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:10:54.536597   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:54.550165   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:10:54.550246   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:54.561451   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:10:54.561521   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:54.571824   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:10:54.571893   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:54.582460   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:10:54.582538   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:54.592021   21725 logs.go:276] 0 containers: []
	W0318 05:10:54.592038   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:54.592090   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:54.604302   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:10:54.604320   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:10:54.604325   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:10:54.618791   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:10:54.618804   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:10:54.634428   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:54.634440   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:10:54.671704   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:10:54.671715   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:10:54.683445   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:10:54.683458   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:10:54.697715   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:54.697727   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:54.721947   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:54.721957   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:54.762121   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:10:54.762137   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:10:54.774601   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:10:54.774613   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:10:54.790792   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:10:54.790803   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:10:54.803422   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:10:54.803433   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:54.815213   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:10:54.815224   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:10:54.829571   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:10:54.829584   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:10:54.841558   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:10:54.841573   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:10:54.866729   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:54.866744   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:57.372929   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:11:02.373811   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:11:02.374044   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:58.446255   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:11:02.397623   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:11:02.398816   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:11:02.414570   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:11:02.414649   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:11:02.427583   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:11:02.427658   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:11:02.438820   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:11:02.438877   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:11:02.448902   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:11:02.448961   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:11:02.459389   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:11:02.459461   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:11:02.469606   21725 logs.go:276] 0 containers: []
	W0318 05:11:02.469618   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:11:02.469674   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:11:02.481204   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:11:02.481226   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:11:02.481232   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:11:02.486245   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:11:02.486252   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:11:02.503931   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:11:02.503943   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:11:02.531726   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:11:02.531739   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:11:02.564739   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:11:02.564747   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:11:02.576820   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:11:02.576831   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:11:02.588982   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:11:02.588997   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:11:02.602566   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:11:02.602578   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:11:02.614574   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:11:02.614587   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:11:02.636273   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:11:02.636288   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:11:02.647509   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:11:02.647524   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:11:02.659053   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:11:02.659069   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:11:02.673208   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:11:02.673219   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:11:02.690983   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:11:02.690996   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:11:02.715849   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:11:02.715861   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:11:05.252486   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:11:03.446088   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:11:03.446324   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:11:03.475812   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:11:03.475934   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:11:03.497296   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:11:03.497377   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:11:03.512265   21713 logs.go:276] 4 containers: [b50d97f3a440 7c8628f6727d 61e158044db5 56b5ade2e09c]
	I0318 05:11:03.512346   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:11:03.523662   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:11:03.523729   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:11:03.533910   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:11:03.533972   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:11:03.544890   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:11:03.544955   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:11:03.554721   21713 logs.go:276] 0 containers: []
	W0318 05:11:03.554732   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:11:03.554782   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:11:03.565431   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:11:03.565450   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:11:03.565456   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:11:03.600602   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:11:03.600697   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:11:03.602761   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:11:03.602765   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:11:03.623312   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:11:03.623324   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:11:03.637321   21713 logs.go:123] Gathering logs for coredns [b50d97f3a440] ...
	I0318 05:11:03.637332   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b50d97f3a440"
	I0318 05:11:03.648917   21713 logs.go:123] Gathering logs for coredns [7c8628f6727d] ...
	I0318 05:11:03.648928   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8628f6727d"
	I0318 05:11:03.660894   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:11:03.660905   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:11:03.672548   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:11:03.672559   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:11:03.713560   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:11:03.713572   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:11:03.732179   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:11:03.732190   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:11:03.755640   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:11:03.755653   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:11:03.780951   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:11:03.780966   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:11:03.785849   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:11:03.785858   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:11:03.798046   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:11:03.798058   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:11:03.810785   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:11:03.810796   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:11:03.822923   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:11:03.822935   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:11:03.834606   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:11:03.834616   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:11:03.834641   21713 out.go:239] X Problems detected in kubelet:
	W0318 05:11:03.834646   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:11:03.834650   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:11:03.834655   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:11:03.834658   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
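
	The pair of kubelet warnings above is the node authorizer at work: a client authenticating as system:node:<name> may list a ConfigMap only once a pod bound to that node references it, so while the upgraded control plane is still settling, the kube-proxy ConfigMap list/watch is refused with "no relationship found". A minimal way to replay that authorization decision from an admin kubeconfig, impersonating the node identity seen in the log (sketch only; assumes the admin context has impersonation rights):

	    # Does the node identity have list access to ConfigMaps in kube-system?
	    # --as/--as-group impersonate the kubelet's credentials for the check.
	    kubectl auth can-i list configmaps \
	      --namespace kube-system \
	      --as system:node:stopped-upgrade-211000 \
	      --as-group system:nodes
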
	I0318 05:11:10.254167   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:11:10.254468   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:11:10.280659   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:11:10.280782   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:11:10.297789   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:11:10.297870   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:11:10.311278   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:11:10.311363   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:11:10.322617   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:11:10.322684   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:11:10.332790   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:11:10.332858   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:11:10.345125   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:11:10.345206   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:11:10.355757   21725 logs.go:276] 0 containers: []
	W0318 05:11:10.355771   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:11:10.355836   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:11:10.366612   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:11:10.366633   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:11:10.366639   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:11:10.381180   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:11:10.381193   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:11:10.395078   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:11:10.395092   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:11:10.407203   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:11:10.407218   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:11:10.425279   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:11:10.425291   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:11:10.448026   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:11:10.448033   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:11:10.460514   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:11:10.460528   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:11:10.472646   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:11:10.472661   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:11:10.487375   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:11:10.487388   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:11:10.498824   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:11:10.498836   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:11:10.510222   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:11:10.510236   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:11:10.514692   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:11:10.514702   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:11:10.552336   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:11:10.552349   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:11:10.566842   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:11:10.566856   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:11:10.600207   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:11:10.600220   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:11:13.118344   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:11:13.837379   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:11:18.120136   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:11:18.120314   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:11:18.135702   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:11:18.135781   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:11:18.153773   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:11:18.153844   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:11:18.164674   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:11:18.164745   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:11:18.174829   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:11:18.174899   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:11:18.185829   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:11:18.185893   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:11:18.196251   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:11:18.196314   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:11:18.206735   21725 logs.go:276] 0 containers: []
	W0318 05:11:18.206748   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:11:18.206806   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:11:18.216815   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:11:18.216829   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:11:18.216834   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:11:18.221267   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:11:18.221275   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:11:18.232503   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:11:18.232514   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:11:18.248396   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:11:18.248407   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:11:18.266532   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:11:18.266543   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:11:18.290481   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:11:18.290490   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:11:18.303422   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:11:18.303435   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:11:18.337759   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:11:18.337767   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:11:18.374456   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:11:18.374468   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:11:18.385812   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:11:18.385822   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:11:18.398235   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:11:18.398247   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:11:18.412863   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:11:18.412876   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:11:18.427240   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:11:18.427257   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:11:18.438801   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:11:18.438811   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:11:18.450555   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:11:18.450568   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:11:20.971295   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:11:18.837112   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:11:18.837261   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:11:18.848763   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:11:18.848841   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:11:18.874237   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:11:18.874312   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:11:18.898056   21713 logs.go:276] 4 containers: [b50d97f3a440 7c8628f6727d 61e158044db5 56b5ade2e09c]
	I0318 05:11:18.898134   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:11:18.911464   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:11:18.911544   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:11:18.922091   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:11:18.922165   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:11:18.932673   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:11:18.932744   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:11:18.942674   21713 logs.go:276] 0 containers: []
	W0318 05:11:18.942687   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:11:18.942741   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:11:18.952814   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:11:18.952829   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:11:18.952834   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:11:18.964529   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:11:18.964540   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:11:18.979057   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:11:18.979074   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:11:18.990996   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:11:18.991007   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:11:19.025164   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:11:19.025175   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:11:19.039713   21713 logs.go:123] Gathering logs for coredns [b50d97f3a440] ...
	I0318 05:11:19.039724   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b50d97f3a440"
	I0318 05:11:19.051703   21713 logs.go:123] Gathering logs for coredns [7c8628f6727d] ...
	I0318 05:11:19.051714   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8628f6727d"
	I0318 05:11:19.063337   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:11:19.063349   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:11:19.075100   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:11:19.075111   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:11:19.089499   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:11:19.089510   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:11:19.114183   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:11:19.114191   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:11:19.150051   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:11:19.150143   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:11:19.152194   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:11:19.152199   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:11:19.169613   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:11:19.169624   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:11:19.181159   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:11:19.181173   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:11:19.193367   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:11:19.193377   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:11:19.197905   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:11:19.197916   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:11:19.197938   21713 out.go:239] X Problems detected in kubelet:
	W0318 05:11:19.197943   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:11:19.197951   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:11:19.197955   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:11:19.197957   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:11:25.973116   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:11:25.973329   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:11:25.985296   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:11:25.985376   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:11:25.996332   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:11:25.996396   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:11:26.006895   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:11:26.006971   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:11:26.017628   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:11:26.017701   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:11:26.027591   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:11:26.027656   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:11:26.038145   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:11:26.038216   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:11:26.048322   21725 logs.go:276] 0 containers: []
	W0318 05:11:26.048333   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:11:26.048387   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:11:26.060202   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:11:26.060221   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:11:26.060227   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:11:26.072194   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:11:26.072206   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:11:26.086980   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:11:26.086990   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:11:26.099082   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:11:26.099093   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:11:26.133869   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:11:26.133881   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:11:26.148416   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:11:26.148427   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:11:26.159606   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:11:26.159616   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:11:26.171141   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:11:26.171150   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:11:26.176061   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:11:26.176070   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:11:26.190647   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:11:26.190659   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:11:26.202753   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:11:26.202764   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:11:26.227064   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:11:26.227074   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:11:26.238601   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:11:26.238615   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:11:26.273302   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:11:26.273309   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:11:26.284873   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:11:26.284883   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:11:28.807102   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:11:29.201353   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:11:34.203395   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:11:34.207695   21713 out.go:177] 
	W0318 05:11:34.211772   21713 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0318 05:11:34.211779   21713 out.go:239] * 
	W0318 05:11:34.212285   21713 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:11:34.222605   21713 out.go:177] 
	I0318 05:11:33.809072   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:11:33.809198   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:11:33.820338   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:11:33.820418   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:11:33.831434   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:11:33.831506   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:11:33.842182   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:11:33.842254   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:11:33.857320   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:11:33.857392   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:11:33.870951   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:11:33.871022   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:11:33.881414   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:11:33.881496   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:11:33.896511   21725 logs.go:276] 0 containers: []
	W0318 05:11:33.896526   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:11:33.896589   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:11:33.908092   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:11:33.908109   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:11:33.908115   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:11:33.941445   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:11:33.941454   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:11:33.956743   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:11:33.956753   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:11:33.974127   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:11:33.974138   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:11:33.986018   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:11:33.986028   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:11:33.997729   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:11:33.997741   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:11:34.010157   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:11:34.010169   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:11:34.021589   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:11:34.021601   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:11:34.025932   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:11:34.025940   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:11:34.060934   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:11:34.060946   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:11:34.075189   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:11:34.075200   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:11:34.087192   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:11:34.087204   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:11:34.098996   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:11:34.099008   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:11:34.112811   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:11:34.112821   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:11:34.124157   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:11:34.124167   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:11:36.649051   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:11:41.651110   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:11:41.651362   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:11:41.682507   21725 logs.go:276] 1 containers: [702f29aaa46e]
	I0318 05:11:41.682625   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:11:41.698140   21725 logs.go:276] 1 containers: [5b71b688ca95]
	I0318 05:11:41.698236   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:11:41.710891   21725 logs.go:276] 4 containers: [52877a5aee47 653552bfe323 27b1f1e110ed 7ab83b2ece4e]
	I0318 05:11:41.710973   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:11:41.722174   21725 logs.go:276] 1 containers: [3b8d75585331]
	I0318 05:11:41.722243   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:11:41.733061   21725 logs.go:276] 1 containers: [ec54c6127165]
	I0318 05:11:41.733135   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:11:41.746700   21725 logs.go:276] 1 containers: [fa7fdd57ee2f]
	I0318 05:11:41.746775   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:11:41.757255   21725 logs.go:276] 0 containers: []
	W0318 05:11:41.757269   21725 logs.go:278] No container was found matching "kindnet"
	I0318 05:11:41.757327   21725 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:11:41.767756   21725 logs.go:276] 1 containers: [9b5906d4a48f]
	I0318 05:11:41.767774   21725 logs.go:123] Gathering logs for coredns [653552bfe323] ...
	I0318 05:11:41.767780   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 653552bfe323"
	I0318 05:11:41.779584   21725 logs.go:123] Gathering logs for kube-controller-manager [fa7fdd57ee2f] ...
	I0318 05:11:41.779597   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa7fdd57ee2f"
	I0318 05:11:41.797337   21725 logs.go:123] Gathering logs for container status ...
	I0318 05:11:41.797347   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:11:41.808704   21725 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:11:41.808715   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:11:41.843887   21725 logs.go:123] Gathering logs for kube-apiserver [702f29aaa46e] ...
	I0318 05:11:41.843899   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 702f29aaa46e"
	I0318 05:11:41.859081   21725 logs.go:123] Gathering logs for coredns [52877a5aee47] ...
	I0318 05:11:41.859094   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52877a5aee47"
	I0318 05:11:41.871844   21725 logs.go:123] Gathering logs for coredns [7ab83b2ece4e] ...
	I0318 05:11:41.871856   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab83b2ece4e"
	I0318 05:11:41.883906   21725 logs.go:123] Gathering logs for kubelet ...
	I0318 05:11:41.883918   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:11:41.919458   21725 logs.go:123] Gathering logs for kube-proxy [ec54c6127165] ...
	I0318 05:11:41.919469   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec54c6127165"
	I0318 05:11:41.932231   21725 logs.go:123] Gathering logs for kube-scheduler [3b8d75585331] ...
	I0318 05:11:41.932244   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8d75585331"
	I0318 05:11:41.947411   21725 logs.go:123] Gathering logs for storage-provisioner [9b5906d4a48f] ...
	I0318 05:11:41.947426   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5906d4a48f"
	I0318 05:11:41.959341   21725 logs.go:123] Gathering logs for Docker ...
	I0318 05:11:41.959352   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:11:41.982013   21725 logs.go:123] Gathering logs for dmesg ...
	I0318 05:11:41.982022   21725 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:11:41.986472   21725 logs.go:123] Gathering logs for etcd [5b71b688ca95] ...
	I0318 05:11:41.986482   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b71b688ca95"
	I0318 05:11:42.000103   21725 logs.go:123] Gathering logs for coredns [27b1f1e110ed] ...
	I0318 05:11:42.000116   21725 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b1f1e110ed"
	I0318 05:11:44.513871   21725 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:11:49.516111   21725 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:11:49.520025   21725 out.go:177] 
	W0318 05:11:49.524039   21725 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0318 05:11:49.524057   21725 out.go:239] * 
	W0318 05:11:49.525626   21725 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:11:49.536004   21725 out.go:177] 
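
	Both clients above exit the same way: every probe of https://10.0.2.15:8443/healthz trips the 5-second client timeout until the overall "wait 6m0s for node" deadline lapses, at which point minikube aborts with GUEST_START. The probe is easy to replay from inside the guest; a sketch with curl, assuming shell access to the VM (-k skips verification of the apiserver's self-signed certificate, --max-time mirrors the 5s budget visible in the log):

	    # Replay the health probe minikube keeps timing out on.
	    curl -k --max-time 5 https://10.0.2.15:8443/healthz && echo healthy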
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-03-18 12:02:25 UTC, ends at Mon 2024-03-18 12:12:05 UTC. --
	Mar 18 12:11:49 running-upgrade-349000 dockerd[4584]: time="2024-03-18T12:11:49.967377294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:11:49 running-upgrade-349000 dockerd[4584]: time="2024-03-18T12:11:49.967475665Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/9e00c26815ef166c9d5bb56cb710b53a1e192d9cc02e2ec4be5e9bbe13a774bc pid=20582 runtime=io.containerd.runc.v2
	Mar 18 12:11:50 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:11:50Z" level=error msg="ContainerStats resp: {0x40007be600 linux}"
	Mar 18 12:11:50 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:11:50Z" level=error msg="ContainerStats resp: {0x400084cd80 linux}"
	Mar 18 12:11:51 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:11:51Z" level=error msg="ContainerStats resp: {0x4000969040 linux}"
	Mar 18 12:11:52 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:11:52Z" level=error msg="ContainerStats resp: {0x4000457e40 linux}"
	Mar 18 12:11:52 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:11:52Z" level=error msg="ContainerStats resp: {0x40005b6280 linux}"
	Mar 18 12:11:52 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:11:52Z" level=error msg="ContainerStats resp: {0x4000822640 linux}"
	Mar 18 12:11:52 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:11:52Z" level=error msg="ContainerStats resp: {0x40005b6900 linux}"
	Mar 18 12:11:52 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:11:52Z" level=error msg="ContainerStats resp: {0x4000823280 linux}"
	Mar 18 12:11:52 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:11:52Z" level=error msg="ContainerStats resp: {0x4000823640 linux}"
	Mar 18 12:11:52 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:11:52Z" level=error msg="ContainerStats resp: {0x4000823c80 linux}"
	Mar 18 12:11:52 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:11:52Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 18 12:11:57 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:11:57Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 18 12:12:02 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:12:02Z" level=error msg="ContainerStats resp: {0x4000457800 linux}"
	Mar 18 12:12:02 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:12:02Z" level=error msg="ContainerStats resp: {0x4000457940 linux}"
	Mar 18 12:12:02 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:12:02Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 18 12:12:03 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:12:03Z" level=error msg="ContainerStats resp: {0x4000968a40 linux}"
	Mar 18 12:12:04 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:12:04Z" level=error msg="ContainerStats resp: {0x4000969c80 linux}"
	Mar 18 12:12:04 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:12:04Z" level=error msg="ContainerStats resp: {0x40004fc540 linux}"
	Mar 18 12:12:04 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:12:04Z" level=error msg="ContainerStats resp: {0x40007a9680 linux}"
	Mar 18 12:12:04 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:12:04Z" level=error msg="ContainerStats resp: {0x40007a9a80 linux}"
	Mar 18 12:12:04 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:12:04Z" level=error msg="ContainerStats resp: {0x400018c800 linux}"
	Mar 18 12:12:04 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:12:04Z" level=error msg="ContainerStats resp: {0x40005b6340 linux}"
	Mar 18 12:12:04 running-upgrade-349000 cri-dockerd[4308]: time="2024-03-18T12:12:04Z" level=error msg="ContainerStats resp: {0x40005b6680 linux}"
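
	The error-level "ContainerStats resp" lines are cri-dockerd echoing each stats response it relays to the kubelet; despite the severity they carry no failure detail and are unrelated to the stalled apiserver. The slice above corresponds to the journal command the gatherer runs; to pull it by hand (sketch; --no-pager added here for non-interactive use):

	    # Last 400 lines from the docker and cri-docker units, as gathered above.
	    sudo journalctl -u docker -u cri-docker -n 400 --no-pager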
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	9e00c26815ef1       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   0b972834fc233
	dfa9291818606       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   821674f638b00
	52877a5aee47e       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   0b972834fc233
	653552bfe3235       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   821674f638b00
	9b5906d4a48f7       ba04bb24b9575       4 minutes ago       Running             storage-provisioner       0                   2f0731378d64d
	ec54c61271655       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   c70493256dded
	5b71b688ca952       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   0f4d38f0cf841
	702f29aaa46e5       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   09a23e8afc4b1
	fa7fdd57ee2f6       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   928886f33fc86
	3b8d755853312       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   5b451cc97959a
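
	The listing shows both CoreDNS pods on attempt 2, their attempt-1 containers having exited about two minutes earlier, while every control-plane container still reports Running even though the healthz probes in the log never succeed. The table itself comes from the gatherer's fallback chain, which prefers crictl and drops back to the Docker CLI:

	    # Same fallback the "container status" gatherer uses above.
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a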
	
	
	==> coredns [52877a5aee47] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2509149266895314268.6403402582168576712. HINFO: read udp 10.244.0.2:52948->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2509149266895314268.6403402582168576712. HINFO: read udp 10.244.0.2:39297->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2509149266895314268.6403402582168576712. HINFO: read udp 10.244.0.2:36822->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2509149266895314268.6403402582168576712. HINFO: read udp 10.244.0.2:53891->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2509149266895314268.6403402582168576712. HINFO: read udp 10.244.0.2:46897->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2509149266895314268.6403402582168576712. HINFO: read udp 10.244.0.2:50542->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2509149266895314268.6403402582168576712. HINFO: read udp 10.244.0.2:50860->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2509149266895314268.6403402582168576712. HINFO: read udp 10.244.0.2:50029->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2509149266895314268.6403402582168576712. HINFO: read udp 10.244.0.2:52804->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2509149266895314268.6403402582168576712. HINFO: read udp 10.244.0.2:33359->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [653552bfe323] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5424061428469481249.9042590705443508823. HINFO: read udp 10.244.0.3:35307->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5424061428469481249.9042590705443508823. HINFO: read udp 10.244.0.3:39949->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5424061428469481249.9042590705443508823. HINFO: read udp 10.244.0.3:60132->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5424061428469481249.9042590705443508823. HINFO: read udp 10.244.0.3:37120->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5424061428469481249.9042590705443508823. HINFO: read udp 10.244.0.3:43471->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5424061428469481249.9042590705443508823. HINFO: read udp 10.244.0.3:60018->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5424061428469481249.9042590705443508823. HINFO: read udp 10.244.0.3:54699->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5424061428469481249.9042590705443508823. HINFO: read udp 10.244.0.3:39488->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5424061428469481249.9042590705443508823. HINFO: read udp 10.244.0.3:32913->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5424061428469481249.9042590705443508823. HINFO: read udp 10.244.0.3:38770->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9e00c26815ef] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2267822227635593356.5179429034867656873. HINFO: read udp 10.244.0.2:54303->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2267822227635593356.5179429034867656873. HINFO: read udp 10.244.0.2:41650->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2267822227635593356.5179429034867656873. HINFO: read udp 10.244.0.2:59607->10.0.2.3:53: i/o timeout
	
	
	==> coredns [dfa929181860] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4993107556306489572.7577024621644527661. HINFO: read udp 10.244.0.3:38150->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4993107556306489572.7577024621644527661. HINFO: read udp 10.244.0.3:56420->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4993107556306489572.7577024621644527661. HINFO: read udp 10.244.0.3:47167->10.0.2.3:53: i/o timeout
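
	All four CoreDNS instances above fail identically: the HINFO probe that CoreDNS's loop-detection sends at startup times out against the upstream resolver 10.0.2.3:53. That address is the built-in DNS forwarder of QEMU's user-mode (slirp) networking, so the timeouts point at guest-to-host DNS plumbing rather than at CoreDNS itself. A quick probe from inside the guest, assuming dig is installed (+time/+tries bound it to a single 2-second attempt):

	    # Query the slirp DNS forwarder directly, the same upstream CoreDNS uses.
	    dig @10.0.2.3 +time=2 +tries=1 kubernetes.io A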
	
	
	==> describe nodes <==
	Name:               running-upgrade-349000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-349000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de
	                    minikube.k8s.io/name=running-upgrade-349000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T05_07_48_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:07:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-349000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:12:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:07:48 +0000   Mon, 18 Mar 2024 12:07:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:07:48 +0000   Mon, 18 Mar 2024 12:07:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:07:48 +0000   Mon, 18 Mar 2024 12:07:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:07:48 +0000   Mon, 18 Mar 2024 12:07:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-349000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 50081e6ad99841da99995f271fed653c
	  System UUID:                50081e6ad99841da99995f271fed653c
	  Boot ID:                    1ee2e12d-549d-48e5-af84-906d43f8e53a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-flzjp                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-vqmw9                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-349000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-349000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-349000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-rgszz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-349000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  Starting                 4m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m23s (x5 over 4m23s)  kubelet          Node running-upgrade-349000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x5 over 4m23s)  kubelet          Node running-upgrade-349000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-349000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-349000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-349000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-349000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-349000 status is now: NodeReady
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-349000 event: Registered Node running-upgrade-349000 in Controller
	
	
	==> dmesg <==
	[  +0.059231] systemd-fstab-generator[889]: Ignoring "noauto" for root device
	[  +0.060480] systemd-fstab-generator[900]: Ignoring "noauto" for root device
	[  +1.143351] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.068539] systemd-fstab-generator[1050]: Ignoring "noauto" for root device
	[  +0.058714] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[  +2.569807] systemd-fstab-generator[1294]: Ignoring "noauto" for root device
	[ +14.117049] systemd-fstab-generator[1952]: Ignoring "noauto" for root device
	[Mar18 12:03] kauditd_printk_skb: 47 callbacks suppressed
	[  +1.817298] systemd-fstab-generator[2736]: Ignoring "noauto" for root device
	[  +0.157171] systemd-fstab-generator[2769]: Ignoring "noauto" for root device
	[  +0.103291] systemd-fstab-generator[2780]: Ignoring "noauto" for root device
	[  +0.107996] systemd-fstab-generator[2793]: Ignoring "noauto" for root device
	[  +4.422388] kauditd_printk_skb: 16 callbacks suppressed
	[ +12.427691] systemd-fstab-generator[4265]: Ignoring "noauto" for root device
	[  +0.096368] systemd-fstab-generator[4276]: Ignoring "noauto" for root device
	[  +0.085939] systemd-fstab-generator[4287]: Ignoring "noauto" for root device
	[  +0.096678] systemd-fstab-generator[4301]: Ignoring "noauto" for root device
	[  +2.012212] systemd-fstab-generator[4454]: Ignoring "noauto" for root device
	[  +4.546264] systemd-fstab-generator[4932]: Ignoring "noauto" for root device
	[  +0.983728] systemd-fstab-generator[5060]: Ignoring "noauto" for root device
	[  +7.255356] kauditd_printk_skb: 80 callbacks suppressed
	[ +11.887627] kauditd_printk_skb: 5 callbacks suppressed
	[Mar18 12:07] systemd-fstab-generator[13874]: Ignoring "noauto" for root device
	[  +6.127553] systemd-fstab-generator[14487]: Ignoring "noauto" for root device
	[  +0.476939] systemd-fstab-generator[14618]: Ignoring "noauto" for root device
	
	
	==> etcd [5b71b688ca95] <==
	{"level":"info","ts":"2024-03-18T12:07:43.899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-03-18T12:07:43.900Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-03-18T12:07:43.900Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T12:07:43.901Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T12:07:43.901Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T12:07:43.901Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-18T12:07:43.901Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-18T12:07:44.643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-18T12:07:44.643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-18T12:07:44.643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-03-18T12:07:44.643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-03-18T12:07:44.643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-18T12:07:44.643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-03-18T12:07:44.643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-18T12:07:44.643Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T12:07:44.644Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T12:07:44.644Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T12:07:44.644Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T12:07:44.644Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-349000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T12:07:44.644Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T12:07:44.644Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T12:07:44.645Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T12:07:44.645Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T12:07:44.645Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-03-18T12:07:44.645Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:12:06 up 9 min,  0 users,  load average: 0.78, 0.42, 0.22
	Linux running-upgrade-349000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [702f29aaa46e] <==
	I0318 12:07:45.891267       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0318 12:07:45.895570       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 12:07:45.897930       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0318 12:07:45.897993       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 12:07:45.902750       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0318 12:07:45.902804       1 cache.go:39] Caches are synced for autoregister controller
	I0318 12:07:45.907929       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0318 12:07:46.623111       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0318 12:07:46.804418       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0318 12:07:46.808061       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0318 12:07:46.808853       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 12:07:46.958624       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 12:07:46.968650       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 12:07:47.070164       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0318 12:07:47.072442       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0318 12:07:47.072820       1 controller.go:611] quota admission added evaluator for: endpoints
	I0318 12:07:47.074349       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 12:07:47.930899       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0318 12:07:48.673060       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0318 12:07:48.679520       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0318 12:07:48.688101       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0318 12:07:48.736834       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 12:08:01.034829       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0318 12:08:01.585358       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0318 12:08:02.097293       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [fa7fdd57ee2f] <==
	I0318 12:08:00.789193       1 shared_informer.go:262] Caches are synced for node
	I0318 12:08:00.789208       1 range_allocator.go:173] Starting range CIDR allocator
	I0318 12:08:00.789210       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0318 12:08:00.789214       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0318 12:08:00.791855       1 range_allocator.go:374] Set node running-upgrade-349000 PodCIDR to [10.244.0.0/24]
	I0318 12:08:00.797158       1 shared_informer.go:262] Caches are synced for endpoint
	I0318 12:08:00.833073       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0318 12:08:00.864248       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0318 12:08:00.885079       1 shared_informer.go:262] Caches are synced for stateful set
	I0318 12:08:00.903146       1 shared_informer.go:262] Caches are synced for disruption
	I0318 12:08:00.903157       1 disruption.go:371] Sending events to api server.
	I0318 12:08:00.908393       1 shared_informer.go:262] Caches are synced for attach detach
	I0318 12:08:00.932820       1 shared_informer.go:262] Caches are synced for persistent volume
	I0318 12:08:00.952045       1 shared_informer.go:262] Caches are synced for resource quota
	I0318 12:08:00.958647       1 shared_informer.go:262] Caches are synced for expand
	I0318 12:08:00.981253       1 shared_informer.go:262] Caches are synced for PV protection
	I0318 12:08:00.987545       1 shared_informer.go:262] Caches are synced for resource quota
	I0318 12:08:01.033274       1 shared_informer.go:262] Caches are synced for HPA
	I0318 12:08:01.036341       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0318 12:08:01.400628       1 shared_informer.go:262] Caches are synced for garbage collector
	I0318 12:08:01.480460       1 shared_informer.go:262] Caches are synced for garbage collector
	I0318 12:08:01.480487       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0318 12:08:01.588032       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rgszz"
	I0318 12:08:01.786895       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-flzjp"
	I0318 12:08:01.789796       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-vqmw9"
	
	
	==> kube-proxy [ec54c6127165] <==
	I0318 12:08:02.084212       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0318 12:08:02.084238       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0318 12:08:02.084354       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0318 12:08:02.095183       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0318 12:08:02.095196       1 server_others.go:206] "Using iptables Proxier"
	I0318 12:08:02.095210       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0318 12:08:02.095296       1 server.go:661] "Version info" version="v1.24.1"
	I0318 12:08:02.095300       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:08:02.095520       1 config.go:317] "Starting service config controller"
	I0318 12:08:02.095531       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0318 12:08:02.095543       1 config.go:226] "Starting endpoint slice config controller"
	I0318 12:08:02.095545       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0318 12:08:02.096314       1 config.go:444] "Starting node config controller"
	I0318 12:08:02.096328       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0318 12:08:02.196021       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0318 12:08:02.196051       1 shared_informer.go:262] Caches are synced for service config
	I0318 12:08:02.197323       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [3b8d75585331] <==
	W0318 12:07:45.859566       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 12:07:45.859588       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 12:07:45.859770       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 12:07:45.859796       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 12:07:45.859832       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 12:07:45.859851       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 12:07:45.859876       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 12:07:45.859911       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 12:07:45.859937       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 12:07:45.860019       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 12:07:45.860052       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 12:07:45.860140       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 12:07:45.860099       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 12:07:45.860199       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 12:07:45.860114       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 12:07:45.860258       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 12:07:45.860133       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 12:07:45.860299       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 12:07:46.671695       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 12:07:46.671761       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 12:07:46.839274       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 12:07:46.839517       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 12:07:46.910385       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 12:07:46.910485       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 12:07:47.439628       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-03-18 12:02:25 UTC, ends at Mon 2024-03-18 12:12:06 UTC. --
	Mar 18 12:07:50 running-upgrade-349000 kubelet[14493]: E0318 12:07:50.908210   14493 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-349000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-349000"
	Mar 18 12:08:00 running-upgrade-349000 kubelet[14493]: I0318 12:08:00.712023   14493 topology_manager.go:200] "Topology Admit Handler"
	Mar 18 12:08:00 running-upgrade-349000 kubelet[14493]: I0318 12:08:00.891112   14493 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 18 12:08:00 running-upgrade-349000 kubelet[14493]: I0318 12:08:00.891137   14493 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn7kr\" (UniqueName: \"kubernetes.io/projected/f220936d-e28b-4cb3-b853-02bfb5ead218-kube-api-access-kn7kr\") pod \"storage-provisioner\" (UID: \"f220936d-e28b-4cb3-b853-02bfb5ead218\") " pod="kube-system/storage-provisioner"
	Mar 18 12:08:00 running-upgrade-349000 kubelet[14493]: I0318 12:08:00.891153   14493 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f220936d-e28b-4cb3-b853-02bfb5ead218-tmp\") pod \"storage-provisioner\" (UID: \"f220936d-e28b-4cb3-b853-02bfb5ead218\") " pod="kube-system/storage-provisioner"
	Mar 18 12:08:00 running-upgrade-349000 kubelet[14493]: I0318 12:08:00.891481   14493 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 18 12:08:00 running-upgrade-349000 kubelet[14493]: E0318 12:08:00.995307   14493 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 18 12:08:00 running-upgrade-349000 kubelet[14493]: E0318 12:08:00.995328   14493 projected.go:192] Error preparing data for projected volume kube-api-access-kn7kr for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Mar 18 12:08:00 running-upgrade-349000 kubelet[14493]: E0318 12:08:00.995366   14493 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/f220936d-e28b-4cb3-b853-02bfb5ead218-kube-api-access-kn7kr podName:f220936d-e28b-4cb3-b853-02bfb5ead218 nodeName:}" failed. No retries permitted until 2024-03-18 12:08:01.495353029 +0000 UTC m=+12.835452957 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kn7kr" (UniqueName: "kubernetes.io/projected/f220936d-e28b-4cb3-b853-02bfb5ead218-kube-api-access-kn7kr") pod "storage-provisioner" (UID: "f220936d-e28b-4cb3-b853-02bfb5ead218") : configmap "kube-root-ca.crt" not found
	Mar 18 12:08:01 running-upgrade-349000 kubelet[14493]: I0318 12:08:01.592104   14493 topology_manager.go:200] "Topology Admit Handler"
	Mar 18 12:08:01 running-upgrade-349000 kubelet[14493]: I0318 12:08:01.594143   14493 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b794adf-2156-419f-bfd0-60bfa772de12-lib-modules\") pod \"kube-proxy-rgszz\" (UID: \"6b794adf-2156-419f-bfd0-60bfa772de12\") " pod="kube-system/kube-proxy-rgszz"
	Mar 18 12:08:01 running-upgrade-349000 kubelet[14493]: I0318 12:08:01.594172   14493 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6b794adf-2156-419f-bfd0-60bfa772de12-kube-proxy\") pod \"kube-proxy-rgszz\" (UID: \"6b794adf-2156-419f-bfd0-60bfa772de12\") " pod="kube-system/kube-proxy-rgszz"
	Mar 18 12:08:01 running-upgrade-349000 kubelet[14493]: I0318 12:08:01.594183   14493 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b794adf-2156-419f-bfd0-60bfa772de12-xtables-lock\") pod \"kube-proxy-rgszz\" (UID: \"6b794adf-2156-419f-bfd0-60bfa772de12\") " pod="kube-system/kube-proxy-rgszz"
	Mar 18 12:08:01 running-upgrade-349000 kubelet[14493]: I0318 12:08:01.594192   14493 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvtjv\" (UniqueName: \"kubernetes.io/projected/6b794adf-2156-419f-bfd0-60bfa772de12-kube-api-access-vvtjv\") pod \"kube-proxy-rgszz\" (UID: \"6b794adf-2156-419f-bfd0-60bfa772de12\") " pod="kube-system/kube-proxy-rgszz"
	Mar 18 12:08:01 running-upgrade-349000 kubelet[14493]: E0318 12:08:01.594253   14493 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 18 12:08:01 running-upgrade-349000 kubelet[14493]: E0318 12:08:01.594263   14493 projected.go:192] Error preparing data for projected volume kube-api-access-kn7kr for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Mar 18 12:08:01 running-upgrade-349000 kubelet[14493]: E0318 12:08:01.594283   14493 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/f220936d-e28b-4cb3-b853-02bfb5ead218-kube-api-access-kn7kr podName:f220936d-e28b-4cb3-b853-02bfb5ead218 nodeName:}" failed. No retries permitted until 2024-03-18 12:08:02.594275512 +0000 UTC m=+13.934375440 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-kn7kr" (UniqueName: "kubernetes.io/projected/f220936d-e28b-4cb3-b853-02bfb5ead218-kube-api-access-kn7kr") pod "storage-provisioner" (UID: "f220936d-e28b-4cb3-b853-02bfb5ead218") : configmap "kube-root-ca.crt" not found
	Mar 18 12:08:01 running-upgrade-349000 kubelet[14493]: I0318 12:08:01.791521   14493 topology_manager.go:200] "Topology Admit Handler"
	Mar 18 12:08:01 running-upgrade-349000 kubelet[14493]: I0318 12:08:01.795492   14493 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9cvj\" (UniqueName: \"kubernetes.io/projected/432f0606-0923-4bfb-8251-fdef6c9fc890-kube-api-access-d9cvj\") pod \"coredns-6d4b75cb6d-flzjp\" (UID: \"432f0606-0923-4bfb-8251-fdef6c9fc890\") " pod="kube-system/coredns-6d4b75cb6d-flzjp"
	Mar 18 12:08:01 running-upgrade-349000 kubelet[14493]: I0318 12:08:01.795595   14493 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/432f0606-0923-4bfb-8251-fdef6c9fc890-config-volume\") pod \"coredns-6d4b75cb6d-flzjp\" (UID: \"432f0606-0923-4bfb-8251-fdef6c9fc890\") " pod="kube-system/coredns-6d4b75cb6d-flzjp"
	Mar 18 12:08:01 running-upgrade-349000 kubelet[14493]: I0318 12:08:01.798269   14493 topology_manager.go:200] "Topology Admit Handler"
	Mar 18 12:08:01 running-upgrade-349000 kubelet[14493]: I0318 12:08:01.895757   14493 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9515029-9812-48b2-b07a-d83cf7fa9238-config-volume\") pod \"coredns-6d4b75cb6d-vqmw9\" (UID: \"a9515029-9812-48b2-b07a-d83cf7fa9238\") " pod="kube-system/coredns-6d4b75cb6d-vqmw9"
	Mar 18 12:08:01 running-upgrade-349000 kubelet[14493]: I0318 12:08:01.895799   14493 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgc7c\" (UniqueName: \"kubernetes.io/projected/a9515029-9812-48b2-b07a-d83cf7fa9238-kube-api-access-cgc7c\") pod \"coredns-6d4b75cb6d-vqmw9\" (UID: \"a9515029-9812-48b2-b07a-d83cf7fa9238\") " pod="kube-system/coredns-6d4b75cb6d-vqmw9"
	Mar 18 12:11:50 running-upgrade-349000 kubelet[14493]: I0318 12:11:50.099368   14493 scope.go:110] "RemoveContainer" containerID="27b1f1e110ed60337516d8f99b7158f08ad47f6437842ac782281e8fd333709c"
	Mar 18 12:11:50 running-upgrade-349000 kubelet[14493]: I0318 12:11:50.113812   14493 scope.go:110] "RemoveContainer" containerID="7ab83b2ece4e181c515fdeda1b70a1d909ecbd6827cfe5421b4ae7af6b9558f2"
	
	
	==> storage-provisioner [9b5906d4a48f] <==
	I0318 12:08:03.014804       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 12:08:03.020057       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 12:08:03.020074       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 12:08:03.026651       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 12:08:03.026707       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-349000_c6eb5701-b441-49f6-8f24-2fa96410e19a!
	I0318 12:08:03.026919       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d5b48c91-9212-4a6a-b78c-4ba2333ada19", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-349000_c6eb5701-b441-49f6-8f24-2fa96410e19a became leader
	I0318 12:08:03.127262       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-349000_c6eb5701-b441-49f6-8f24-2fa96410e19a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-349000 -n running-upgrade-349000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-349000 -n running-upgrade-349000: exit status 2 (15.761891375s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-349000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-349000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-349000
--- FAIL: TestRunningBinaryUpgrade (662.22s)

                                                
                                    
TestKubernetesUpgrade (17.31s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-304000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-304000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.858353625s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-304000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-304000" primary control-plane node in "kubernetes-upgrade-304000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-304000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
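Both create attempts in the stdout above die at the same step: minikube launches qemu-system-aarch64 through socket_vmnet_client, but nothing is listening on /var/run/socket_vmnet, so the connection is refused before the VM can boot. A plausible host-side check, assuming socket_vmnet is installed at the paths the log shows (the manual launch line follows the socket_vmnet README and is illustrative; the gateway address may differ per setup):

	# Is anything serving the socket?
	ls -l /var/run/socket_vmnet
	# If not, start the daemon by hand (vmnet requires root):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

The stderr trace that follows records the same two failed attempts with full timing.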
** stderr ** 
	I0318 05:01:02.754338   21608 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:01:02.754471   21608 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:01:02.754475   21608 out.go:304] Setting ErrFile to fd 2...
	I0318 05:01:02.754478   21608 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:01:02.754616   21608 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:01:02.755681   21608 out.go:298] Setting JSON to false
	I0318 05:01:02.771597   21608 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10835,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:01:02.771655   21608 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:01:02.776270   21608 out.go:177] * [kubernetes-upgrade-304000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:01:02.791269   21608 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:01:02.796054   21608 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:01:02.791302   21608 notify.go:220] Checking for updates...
	I0318 05:01:02.805162   21608 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:01:02.808069   21608 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:01:02.811171   21608 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:01:02.814187   21608 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:01:02.815945   21608 config.go:182] Loaded profile config "multinode-730000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:01:02.816025   21608 config.go:182] Loaded profile config "offline-docker-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:01:02.816073   21608 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:01:02.820205   21608 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 05:01:02.830168   21608 start.go:297] selected driver: qemu2
	I0318 05:01:02.830177   21608 start.go:901] validating driver "qemu2" against <nil>
	I0318 05:01:02.830182   21608 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:01:02.832436   21608 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 05:01:02.835162   21608 out.go:177] * Automatically selected the socket_vmnet network
	I0318 05:01:02.838253   21608 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 05:01:02.838283   21608 cni.go:84] Creating CNI manager for ""
	I0318 05:01:02.838290   21608 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 05:01:02.838319   21608 start.go:340] cluster config:
	{Name:kubernetes-upgrade-304000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-304000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:01:02.843317   21608 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:01:02.850168   21608 out.go:177] * Starting "kubernetes-upgrade-304000" primary control-plane node in "kubernetes-upgrade-304000" cluster
	I0318 05:01:02.854123   21608 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 05:01:02.854137   21608 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 05:01:02.854144   21608 cache.go:56] Caching tarball of preloaded images
	I0318 05:01:02.854205   21608 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:01:02.854211   21608 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 05:01:02.854277   21608 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/kubernetes-upgrade-304000/config.json ...
	I0318 05:01:02.854288   21608 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/kubernetes-upgrade-304000/config.json: {Name:mkbbc44d3d2480251ff135aafb661a9523ecc334 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:01:02.854527   21608 start.go:360] acquireMachinesLock for kubernetes-upgrade-304000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:01:02.854560   21608 start.go:364] duration metric: took 27.458µs to acquireMachinesLock for "kubernetes-upgrade-304000"
	I0318 05:01:02.854575   21608 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-304000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-304000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:01:02.854610   21608 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:01:02.863154   21608 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 05:01:02.881145   21608 start.go:159] libmachine.API.Create for "kubernetes-upgrade-304000" (driver="qemu2")
	I0318 05:01:02.881173   21608 client.go:168] LocalClient.Create starting
	I0318 05:01:02.881233   21608 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:01:02.881263   21608 main.go:141] libmachine: Decoding PEM data...
	I0318 05:01:02.881273   21608 main.go:141] libmachine: Parsing certificate...
	I0318 05:01:02.881320   21608 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:01:02.881353   21608 main.go:141] libmachine: Decoding PEM data...
	I0318 05:01:02.881362   21608 main.go:141] libmachine: Parsing certificate...
	I0318 05:01:02.881761   21608 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:01:03.022754   21608 main.go:141] libmachine: Creating SSH key...
	I0318 05:01:03.171379   21608 main.go:141] libmachine: Creating Disk image...
	I0318 05:01:03.171390   21608 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:01:03.171576   21608 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/disk.qcow2
	I0318 05:01:03.184026   21608 main.go:141] libmachine: STDOUT: 
	I0318 05:01:03.184043   21608 main.go:141] libmachine: STDERR: 
	I0318 05:01:03.184100   21608 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/disk.qcow2 +20000M
	I0318 05:01:03.194585   21608 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:01:03.194600   21608 main.go:141] libmachine: STDERR: 
	I0318 05:01:03.194613   21608 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/disk.qcow2
	I0318 05:01:03.194618   21608 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:01:03.194664   21608 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:49:a9:48:4e:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/disk.qcow2
	I0318 05:01:03.196362   21608 main.go:141] libmachine: STDOUT: 
	I0318 05:01:03.196376   21608 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:01:03.196394   21608 client.go:171] duration metric: took 315.225792ms to LocalClient.Create
	I0318 05:01:05.198560   21608 start.go:128] duration metric: took 2.343996291s to createHost
	I0318 05:01:05.198687   21608 start.go:83] releasing machines lock for "kubernetes-upgrade-304000", held for 2.344150583s
	W0318 05:01:05.198767   21608 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:01:05.214895   21608 out.go:177] * Deleting "kubernetes-upgrade-304000" in qemu2 ...
	W0318 05:01:05.240365   21608 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:01:05.240396   21608 start.go:728] Will try again in 5 seconds ...
	I0318 05:01:10.242346   21608 start.go:360] acquireMachinesLock for kubernetes-upgrade-304000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:01:10.242447   21608 start.go:364] duration metric: took 79.25µs to acquireMachinesLock for "kubernetes-upgrade-304000"
	I0318 05:01:10.242472   21608 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-304000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-304000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:01:10.242531   21608 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:01:10.252095   21608 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 05:01:10.271271   21608 start.go:159] libmachine.API.Create for "kubernetes-upgrade-304000" (driver="qemu2")
	I0318 05:01:10.271307   21608 client.go:168] LocalClient.Create starting
	I0318 05:01:10.271372   21608 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:01:10.271407   21608 main.go:141] libmachine: Decoding PEM data...
	I0318 05:01:10.271430   21608 main.go:141] libmachine: Parsing certificate...
	I0318 05:01:10.271469   21608 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:01:10.271493   21608 main.go:141] libmachine: Decoding PEM data...
	I0318 05:01:10.271499   21608 main.go:141] libmachine: Parsing certificate...
	I0318 05:01:10.271840   21608 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:01:10.411823   21608 main.go:141] libmachine: Creating SSH key...
	I0318 05:01:10.508761   21608 main.go:141] libmachine: Creating Disk image...
	I0318 05:01:10.508774   21608 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:01:10.508962   21608 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/disk.qcow2
	I0318 05:01:10.521536   21608 main.go:141] libmachine: STDOUT: 
	I0318 05:01:10.521560   21608 main.go:141] libmachine: STDERR: 
	I0318 05:01:10.521615   21608 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/disk.qcow2 +20000M
	I0318 05:01:10.532341   21608 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:01:10.532364   21608 main.go:141] libmachine: STDERR: 
	I0318 05:01:10.532386   21608 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/disk.qcow2
	I0318 05:01:10.532394   21608 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:01:10.532427   21608 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:7a:ae:d2:c1:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/disk.qcow2
	I0318 05:01:10.534184   21608 main.go:141] libmachine: STDOUT: 
	I0318 05:01:10.534200   21608 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:01:10.534213   21608 client.go:171] duration metric: took 262.90975ms to LocalClient.Create
	I0318 05:01:12.536374   21608 start.go:128] duration metric: took 2.293882959s to createHost
	I0318 05:01:12.536447   21608 start.go:83] releasing machines lock for "kubernetes-upgrade-304000", held for 2.294061542s
	W0318 05:01:12.537081   21608 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-304000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-304000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:01:12.550842   21608 out.go:177] 
	W0318 05:01:12.555903   21608 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:01:12.555942   21608 out.go:239] * 
	* 
	W0318 05:01:12.558436   21608 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:01:12.568828   21608 out.go:177] 

** /stderr **
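Every qemu2 start in the trace above dies at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon ("Connection refused"), so QEMU never receives the network file descriptor passed as -netdev socket,id=net0,fd=3. A quick way to confirm the daemon's state on the affected host is to dial the socket directly. The Go sketch below is hypothetical (not part of the test suite); the socket path is taken verbatim from the log:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// Dial the Unix socket that socket_vmnet_client is trying to reach.
// "connection refused" matches the libmachine STDERR above: the socket file
// exists but no daemon is accepting on it. A missing-file error would instead
// mean socket_vmnet was never started at this path.
func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the log
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening at", sock)
}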
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-304000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-304000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-304000: (1.968434584s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-304000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-304000 status --format={{.Host}}: exit status 7 (66.949542ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
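The harness tolerates the non-zero exit here because minikube status uses distinct exit codes to report host state; the "(may be ok)" annotation suggests code 7 simply means the host is stopped. A minimal sketch of that tolerance, assuming (as the harness comment implies) that exit status 7 is informational rather than an error; binary path and profile name are copied from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"-p", "kubernetes-upgrade-304000", "status", "--format={{.Host}}")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		// Exit status 7 reports host state rather than a command failure;
		// the formatted state ("Stopped") still arrives on stdout.
		fmt.Printf("host state: %s (exit 7, may be ok)\n", strings.TrimSpace(string(out)))
		return
	}
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Println("host state:", strings.TrimSpace(string(out)))
}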
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-304000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-304000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.207517666s)

-- stdout --
	* [kubernetes-upgrade-304000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-304000" primary control-plane node in "kubernetes-upgrade-304000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-304000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-304000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:01:14.653029   21652 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:01:14.653168   21652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:01:14.653171   21652 out.go:304] Setting ErrFile to fd 2...
	I0318 05:01:14.653174   21652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:01:14.653289   21652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:01:14.654294   21652 out.go:298] Setting JSON to false
	I0318 05:01:14.670182   21652 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10847,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:01:14.670250   21652 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:01:14.675378   21652 out.go:177] * [kubernetes-upgrade-304000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:01:14.683266   21652 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:01:14.687329   21652 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:01:14.683297   21652 notify.go:220] Checking for updates...
	I0318 05:01:14.690252   21652 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:01:14.694299   21652 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:01:14.702202   21652 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:01:14.710118   21652 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:01:14.714565   21652 config.go:182] Loaded profile config "kubernetes-upgrade-304000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0318 05:01:14.714819   21652 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:01:14.720128   21652 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 05:01:14.727215   21652 start.go:297] selected driver: qemu2
	I0318 05:01:14.727221   21652 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-304000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-304000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:01:14.727279   21652 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:01:14.729655   21652 cni.go:84] Creating CNI manager for ""
	I0318 05:01:14.729674   21652 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:01:14.729699   21652 start.go:340] cluster config:
	{Name:kubernetes-upgrade-304000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-304000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:01:14.734211   21652 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:01:14.742275   21652 out.go:177] * Starting "kubernetes-upgrade-304000" primary control-plane node in "kubernetes-upgrade-304000" cluster
	I0318 05:01:14.746252   21652 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 05:01:14.746267   21652 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0318 05:01:14.746276   21652 cache.go:56] Caching tarball of preloaded images
	I0318 05:01:14.746334   21652 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:01:14.746340   21652 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0318 05:01:14.746392   21652 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/kubernetes-upgrade-304000/config.json ...
	I0318 05:01:14.746780   21652 start.go:360] acquireMachinesLock for kubernetes-upgrade-304000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:01:14.746812   21652 start.go:364] duration metric: took 26.167µs to acquireMachinesLock for "kubernetes-upgrade-304000"
	I0318 05:01:14.746822   21652 start.go:96] Skipping create...Using existing machine configuration
	I0318 05:01:14.746827   21652 fix.go:54] fixHost starting: 
	I0318 05:01:14.746941   21652 fix.go:112] recreateIfNeeded on kubernetes-upgrade-304000: state=Stopped err=<nil>
	W0318 05:01:14.746950   21652 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 05:01:14.755323   21652 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-304000" ...
	I0318 05:01:14.759372   21652 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:7a:ae:d2:c1:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/disk.qcow2
	I0318 05:01:14.761338   21652 main.go:141] libmachine: STDOUT: 
	I0318 05:01:14.761355   21652 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:01:14.761384   21652 fix.go:56] duration metric: took 14.558125ms for fixHost
	I0318 05:01:14.761389   21652 start.go:83] releasing machines lock for "kubernetes-upgrade-304000", held for 14.572375ms
	W0318 05:01:14.761400   21652 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:01:14.761427   21652 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:01:14.761431   21652 start.go:728] Will try again in 5 seconds ...
	I0318 05:01:19.762910   21652 start.go:360] acquireMachinesLock for kubernetes-upgrade-304000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:01:19.763379   21652 start.go:364] duration metric: took 328.458µs to acquireMachinesLock for "kubernetes-upgrade-304000"
	I0318 05:01:19.763500   21652 start.go:96] Skipping create...Using existing machine configuration
	I0318 05:01:19.763521   21652 fix.go:54] fixHost starting: 
	I0318 05:01:19.764180   21652 fix.go:112] recreateIfNeeded on kubernetes-upgrade-304000: state=Stopped err=<nil>
	W0318 05:01:19.764209   21652 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 05:01:19.772120   21652 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-304000" ...
	I0318 05:01:19.777482   21652 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:7a:ae:d2:c1:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubernetes-upgrade-304000/disk.qcow2
	I0318 05:01:19.787836   21652 main.go:141] libmachine: STDOUT: 
	I0318 05:01:19.787928   21652 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:01:19.788037   21652 fix.go:56] duration metric: took 24.514417ms for fixHost
	I0318 05:01:19.788060   21652 start.go:83] releasing machines lock for "kubernetes-upgrade-304000", held for 24.651583ms
	W0318 05:01:19.788331   21652 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-304000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-304000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:01:19.797155   21652 out.go:177] 
	W0318 05:01:19.801097   21652 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:01:19.801188   21652 out.go:239] * 
	* 
	W0318 05:01:19.803787   21652 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:01:19.815073   21652 out.go:177] 

** /stderr **
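The trace above also shows libmachine's fixed retry policy around host start: one failure logged as "StartHost failed, but will try again", a five-second wait, a single second attempt, then exit with GUEST_PROVISION. A compact Go sketch of that single-retry shape (names and the stubbed error are illustrative, not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start that fails twice in the trace.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := startHost()
	if err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // the fixed back-off visible in the log
		err = startHost()
	}
	if err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}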
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-304000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-304000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-304000 version --output=json: exit status 1 (64.974167ms)

** stderr ** 
	error: context "kubernetes-upgrade-304000" does not exist

** /stderr **
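The final check fails for a consequent reason: the VM was never provisioned, so minikube never wrote a "kubernetes-upgrade-304000" context into the kubeconfig, and kubectl has nothing to target. One can reproduce the same failure mode directly with kubectl's config subcommand; a small hypothetical Go wrapper:

package main

import (
	"fmt"
	"os/exec"
)

// Ask kubectl whether the context was ever written. With provisioning having
// failed, this exits non-zero just like the test's version check above.
func main() {
	cmd := exec.Command("kubectl", "config", "get-contexts", "kubernetes-upgrade-304000")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("context missing, as in the log: %v\n%s", err, out)
		return
	}
	fmt.Println("context exists")
}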
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-03-18 05:01:19.895598 -0700 PDT m=+798.893538918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-304000 -n kubernetes-upgrade-304000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-304000 -n kubernetes-upgrade-304000: exit status 7 (35.747417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-304000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-304000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-304000
--- FAIL: TestKubernetesUpgrade (17.31s)
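Since the qemu2-backed tests in this run fail uniformly on the unreachable socket_vmnet socket before any Kubernetes logic runs, a harness could turn the systematic failure into an explicit environment skip. A hypothetical guard (not present in helpers_test.go; it reuses the same dial check as the probe sketched earlier) that a test would call before provisioning:

package upgrade_test

import (
	"net"
	"testing"
	"time"
)

// requireSocketVMnet skips a qemu2-backed test when nothing is listening on
// the socket_vmnet socket, so a missing daemon reads as a skip, not a failure.
func requireSocketVMnet(t *testing.T) {
	t.Helper()
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
	if err != nil {
		t.Skipf("socket_vmnet unavailable, skipping qemu2 test: %v", err)
	}
	conn.Close()
}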

TestStoppedBinaryUpgrade/Upgrade (619.11s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.358694266 start -p stopped-upgrade-211000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.358694266 start -p stopped-upgrade-211000 --memory=2200 --vm-driver=qemu2 : (1m20.411482459s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.358694266 -p stopped-upgrade-211000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.358694266 -p stopped-upgrade-211000 stop: (12.132355167s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-211000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-211000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m46.5084485s)

-- stdout --
	* [stopped-upgrade-211000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-211000" primary control-plane node in "stopped-upgrade-211000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-211000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0318 05:02:47.808596   21713 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:02:47.808749   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:02:47.808753   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:02:47.808756   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:02:47.808890   21713 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:02:47.809926   21713 out.go:298] Setting JSON to false
	I0318 05:02:47.828071   21713 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10940,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:02:47.828139   21713 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:02:47.833439   21713 out.go:177] * [stopped-upgrade-211000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:02:47.847295   21713 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:02:47.842463   21713 notify.go:220] Checking for updates...
	I0318 05:02:47.853447   21713 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:02:47.857392   21713 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:02:47.860412   21713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:02:47.863433   21713 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:02:47.866350   21713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:02:47.869653   21713 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 05:02:47.873412   21713 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 05:02:47.876419   21713 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:02:47.880401   21713 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 05:02:47.887292   21713 start.go:297] selected driver: qemu2
	I0318 05:02:47.887297   21713 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54310 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 05:02:47.887356   21713 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:02:47.890082   21713 cni.go:84] Creating CNI manager for ""
	I0318 05:02:47.890101   21713 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:02:47.890128   21713 start.go:340] cluster config:
	{Name:stopped-upgrade-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54310 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 05:02:47.890181   21713 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:02:47.897369   21713 out.go:177] * Starting "stopped-upgrade-211000" primary control-plane node in "stopped-upgrade-211000" cluster
	I0318 05:02:47.901340   21713 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 05:02:47.901368   21713 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0318 05:02:47.901377   21713 cache.go:56] Caching tarball of preloaded images
	I0318 05:02:47.901435   21713 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:02:47.901442   21713 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0318 05:02:47.901497   21713 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/config.json ...
	I0318 05:02:47.902004   21713 start.go:360] acquireMachinesLock for stopped-upgrade-211000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:02:47.902041   21713 start.go:364] duration metric: took 29.75µs to acquireMachinesLock for "stopped-upgrade-211000"
	I0318 05:02:47.902053   21713 start.go:96] Skipping create...Using existing machine configuration
	I0318 05:02:47.902058   21713 fix.go:54] fixHost starting: 
	I0318 05:02:47.902179   21713 fix.go:112] recreateIfNeeded on stopped-upgrade-211000: state=Stopped err=<nil>
	W0318 05:02:47.902188   21713 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 05:02:47.910409   21713 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-211000" ...
	I0318 05:02:47.914434   21713 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/stopped-upgrade-211000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/stopped-upgrade-211000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/stopped-upgrade-211000/qemu.pid -nic user,model=virtio,hostfwd=tcp::54278-:22,hostfwd=tcp::54279-:2376,hostname=stopped-upgrade-211000 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/stopped-upgrade-211000/disk.qcow2
	I0318 05:02:47.966268   21713 main.go:141] libmachine: STDOUT: 
	I0318 05:02:47.966298   21713 main.go:141] libmachine: STDERR: 
	I0318 05:02:47.966303   21713 main.go:141] libmachine: Waiting for VM to start (ssh -p 54278 docker@127.0.0.1)...
	I0318 05:03:07.324516   21713 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/config.json ...
	I0318 05:03:07.324749   21713 machine.go:94] provisionDockerMachine start ...
	I0318 05:03:07.324793   21713 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:07.324931   21713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105279bf0] 0x10527c450 <nil>  [] 0s} localhost 54278 <nil> <nil>}
	I0318 05:03:07.324935   21713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 05:03:07.387993   21713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 05:03:07.388009   21713 buildroot.go:166] provisioning hostname "stopped-upgrade-211000"
	I0318 05:03:07.388072   21713 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:07.388178   21713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105279bf0] 0x10527c450 <nil>  [] 0s} localhost 54278 <nil> <nil>}
	I0318 05:03:07.388184   21713 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-211000 && echo "stopped-upgrade-211000" | sudo tee /etc/hostname
	I0318 05:03:07.452831   21713 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-211000
	
	I0318 05:03:07.452878   21713 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:07.452988   21713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105279bf0] 0x10527c450 <nil>  [] 0s} localhost 54278 <nil> <nil>}
	I0318 05:03:07.452998   21713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-211000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-211000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-211000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 05:03:07.514664   21713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 05:03:07.514676   21713 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18427-19517/.minikube CaCertPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18427-19517/.minikube}
	I0318 05:03:07.514683   21713 buildroot.go:174] setting up certificates
	I0318 05:03:07.514693   21713 provision.go:84] configureAuth start
	I0318 05:03:07.514699   21713 provision.go:143] copyHostCerts
	I0318 05:03:07.514768   21713 exec_runner.go:144] found /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.pem, removing ...
	I0318 05:03:07.514774   21713 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.pem
	I0318 05:03:07.514899   21713 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.pem (1078 bytes)
	I0318 05:03:07.515071   21713 exec_runner.go:144] found /Users/jenkins/minikube-integration/18427-19517/.minikube/cert.pem, removing ...
	I0318 05:03:07.515075   21713 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18427-19517/.minikube/cert.pem
	I0318 05:03:07.515120   21713 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18427-19517/.minikube/cert.pem (1123 bytes)
	I0318 05:03:07.515210   21713 exec_runner.go:144] found /Users/jenkins/minikube-integration/18427-19517/.minikube/key.pem, removing ...
	I0318 05:03:07.515213   21713 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18427-19517/.minikube/key.pem
	I0318 05:03:07.515303   21713 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18427-19517/.minikube/key.pem (1679 bytes)
	I0318 05:03:07.515397   21713 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-211000 san=[127.0.0.1 localhost minikube stopped-upgrade-211000]
	I0318 05:03:07.815777   21713 provision.go:177] copyRemoteCerts
	I0318 05:03:07.815829   21713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 05:03:07.815839   21713 sshutil.go:53] new ssh client: &{IP:localhost Port:54278 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	I0318 05:03:07.850297   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 05:03:07.857836   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 05:03:07.865429   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 05:03:07.872252   21713 provision.go:87] duration metric: took 357.561833ms to configureAuth
	I0318 05:03:07.872266   21713 buildroot.go:189] setting minikube options for container-runtime
	I0318 05:03:07.872384   21713 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 05:03:07.872424   21713 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:07.872516   21713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105279bf0] 0x10527c450 <nil>  [] 0s} localhost 54278 <nil> <nil>}
	I0318 05:03:07.872521   21713 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 05:03:07.936278   21713 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 05:03:07.936291   21713 buildroot.go:70] root file system type: tmpfs
	I0318 05:03:07.936353   21713 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 05:03:07.936411   21713 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:07.936529   21713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105279bf0] 0x10527c450 <nil>  [] 0s} localhost 54278 <nil> <nil>}
	I0318 05:03:07.936563   21713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 05:03:08.004689   21713 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 05:03:08.004762   21713 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:08.004882   21713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105279bf0] 0x10527c450 <nil>  [] 0s} localhost 54278 <nil> <nil>}
	I0318 05:03:08.004894   21713 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 05:03:08.381067   21713 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 05:03:08.381083   21713 machine.go:97] duration metric: took 1.056361334s to provisionDockerMachine
	I0318 05:03:08.381091   21713 start.go:293] postStartSetup for "stopped-upgrade-211000" (driver="qemu2")
	I0318 05:03:08.381097   21713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 05:03:08.381162   21713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 05:03:08.381173   21713 sshutil.go:53] new ssh client: &{IP:localhost Port:54278 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	I0318 05:03:08.418075   21713 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 05:03:08.419351   21713 info.go:137] Remote host: Buildroot 2021.02.12
	I0318 05:03:08.419360   21713 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18427-19517/.minikube/addons for local assets ...
	I0318 05:03:08.419421   21713 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18427-19517/.minikube/files for local assets ...
	I0318 05:03:08.419517   21713 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18427-19517/.minikube/files/etc/ssl/certs/199262.pem -> 199262.pem in /etc/ssl/certs
	I0318 05:03:08.419604   21713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 05:03:08.422738   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/files/etc/ssl/certs/199262.pem --> /etc/ssl/certs/199262.pem (1708 bytes)
	I0318 05:03:08.429784   21713 start.go:296] duration metric: took 48.6895ms for postStartSetup
	I0318 05:03:08.429804   21713 fix.go:56] duration metric: took 20.528398875s for fixHost
	I0318 05:03:08.429834   21713 main.go:141] libmachine: Using SSH client type: native
	I0318 05:03:08.429939   21713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105279bf0] 0x10527c450 <nil>  [] 0s} localhost 54278 <nil> <nil>}
	I0318 05:03:08.429945   21713 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 05:03:08.492090   21713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710763388.451731087
	
	I0318 05:03:08.492099   21713 fix.go:216] guest clock: 1710763388.451731087
	I0318 05:03:08.492103   21713 fix.go:229] Guest: 2024-03-18 05:03:08.451731087 -0700 PDT Remote: 2024-03-18 05:03:08.429806 -0700 PDT m=+20.658361418 (delta=21.925087ms)
	I0318 05:03:08.492114   21713 fix.go:200] guest clock delta is within tolerance: 21.925087ms
	I0318 05:03:08.492116   21713 start.go:83] releasing machines lock for "stopped-upgrade-211000", held for 20.59072425s
	I0318 05:03:08.492188   21713 ssh_runner.go:195] Run: cat /version.json
	I0318 05:03:08.492196   21713 sshutil.go:53] new ssh client: &{IP:localhost Port:54278 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	I0318 05:03:08.492286   21713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 05:03:08.492345   21713 sshutil.go:53] new ssh client: &{IP:localhost Port:54278 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	W0318 05:03:08.492866   21713 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:54474->127.0.0.1:54278: read: connection reset by peer
	I0318 05:03:08.492884   21713 retry.go:31] will retry after 319.920115ms: ssh: handshake failed: read tcp 127.0.0.1:54474->127.0.0.1:54278: read: connection reset by peer
	W0318 05:03:08.523931   21713 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0318 05:03:08.523991   21713 ssh_runner.go:195] Run: systemctl --version
	I0318 05:03:08.525699   21713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 05:03:08.527208   21713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 05:03:08.527234   21713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0318 05:03:08.530471   21713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0318 05:03:08.535182   21713 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
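
The two find/sed invocations above pin every bridge and podman CNI config to the 10.244.0.0/16 pod CIDR and drop IPv6 subnet/dst entries. The same transformation as a hypothetical Go helper, with the sed expressions simplified into two regexes:

    package main

    import (
        "fmt"
        "regexp"
    )

    var (
        // lines whose quoted subnet/dst value contains a colon are IPv6: drop them
        ipv6Entry = regexp.MustCompile(`(?m)^.*"(subnet|dst)": "[^"]*:[^"]*",?\n`)
        // remaining IPv4 subnets get pinned to the pod CIDR
        ipv4Subnet = regexp.MustCompile(`"subnet": "[^"]*"`)
    )

    func rewriteCNI(conf, podCIDR string) string {
        out := ipv6Entry.ReplaceAllString(conf, "")
        return ipv4Subnet.ReplaceAllString(out, fmt.Sprintf(`"subnet": %q`, podCIDR))
    }

    func main() {
        in := `  "subnet": "2001:db8::/64",
      "subnet": "10.88.0.0/16"
    `
        fmt.Print(rewriteCNI(in, "10.244.0.0/16"))
    }
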
	I0318 05:03:08.535191   21713 start.go:494] detecting cgroup driver to use...
	I0318 05:03:08.535309   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 05:03:08.542405   21713 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0318 05:03:08.546353   21713 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 05:03:08.549866   21713 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 05:03:08.549899   21713 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 05:03:08.553074   21713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 05:03:08.555938   21713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 05:03:08.559316   21713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 05:03:08.562726   21713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 05:03:08.566002   21713 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 05:03:08.569158   21713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 05:03:08.571792   21713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 05:03:08.574879   21713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:08.657180   21713 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 05:03:08.663323   21713 start.go:494] detecting cgroup driver to use...
	I0318 05:03:08.663401   21713 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 05:03:08.670071   21713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 05:03:08.675631   21713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 05:03:08.688196   21713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 05:03:08.693233   21713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 05:03:08.697960   21713 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 05:03:08.750830   21713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
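
The probe/stop pairs above ("systemctl is-active --quiet", then "systemctl stop -f" only on success, then probe again) are how competing runtimes are shut down before Docker is configured. A sketch of the pattern, with local exec standing in for the SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // stopIfActive mirrors the probe/stop pairs in the log: is-active exits 0
    // only for an active unit, so a failed probe means there is nothing to stop.
    func stopIfActive(unit string) error {
        if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", unit).Run(); err != nil {
            return nil // inactive or absent
        }
        return exec.Command("sudo", "systemctl", "stop", "-f", unit).Run()
    }

    func main() {
        for _, unit := range []string{"containerd", "crio"} {
            if err := stopIfActive(unit); err != nil {
                fmt.Println("stopping", unit, "failed:", err)
            }
        }
    }
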
	I0318 05:03:08.756475   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 05:03:08.761918   21713 ssh_runner.go:195] Run: which cri-dockerd
	I0318 05:03:08.763497   21713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 05:03:08.766740   21713 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 05:03:08.771898   21713 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 05:03:08.843137   21713 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 05:03:08.922042   21713 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 05:03:08.922107   21713 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
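
The 130-byte daemon.json written here is what switches Docker to the cgroupfs driver. Its exact contents are not in the log; the field set below is a plausible reconstruction, assumed rather than verbatim:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // assumed field set; only the cgroupfs exec-opt is implied by the log line above
        daemon := map[string]any{
            "exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
            "log-driver":     "json-file",
            "log-opts":       map[string]string{"max-size": "100m"},
            "storage-driver": "overlay2",
        }
        b, _ := json.MarshalIndent(daemon, "", "  ")
        fmt.Println(string(b)) // what would land in /etc/docker/daemon.json
    }
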
	I0318 05:03:08.928086   21713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:09.004971   21713 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 05:03:10.146774   21713 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.141820333s)
	I0318 05:03:10.146846   21713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 05:03:10.152251   21713 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0318 05:03:10.161039   21713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 05:03:10.165672   21713 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 05:03:10.251042   21713 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 05:03:10.332659   21713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:10.417880   21713 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 05:03:10.424479   21713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 05:03:10.429512   21713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:10.516642   21713 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 05:03:10.554349   21713 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 05:03:10.554429   21713 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 05:03:10.557716   21713 start.go:562] Will wait 60s for crictl version
	I0318 05:03:10.557765   21713 ssh_runner.go:195] Run: which crictl
	I0318 05:03:10.558934   21713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 05:03:10.574151   21713 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0318 05:03:10.574220   21713 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 05:03:10.590344   21713 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 05:03:10.609643   21713 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0318 05:03:10.609709   21713 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0318 05:03:10.610905   21713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
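
The bash one-liner above is an idempotent upsert: filter out any existing host.minikube.internal line, append the fresh mapping, and install the result via a temp file so /etc/hosts is never left half-written. The same logic as a hypothetical Go helper:

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHostsEntry drops any line already ending in "\t<name>" and appends
    // the new mapping, exactly what the grep -v / echo pipeline does.
    func upsertHostsEntry(hosts, ip, name string) string {
        var out strings.Builder
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop the stale entry
            }
            out.WriteString(line)
            out.WriteByte('\n')
        }
        fmt.Fprintf(&out, "%s\t%s\n", ip, name)
        return out.String()
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n10.0.2.3\thost.minikube.internal\n"
        fmt.Print(upsertHostsEntry(hosts, "10.0.2.2", "host.minikube.internal"))
    }
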
	I0318 05:03:10.615011   21713 kubeadm.go:877] updating cluster {Name:stopped-upgrade-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54310 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0318 05:03:10.615056   21713 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 05:03:10.615096   21713 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 05:03:10.626664   21713 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 05:03:10.626678   21713 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 05:03:10.626726   21713 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 05:03:10.630341   21713 ssh_runner.go:195] Run: which lz4
	I0318 05:03:10.631524   21713 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 05:03:10.632850   21713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 05:03:10.632862   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0318 05:03:11.374146   21713 docker.go:649] duration metric: took 742.674125ms to copy over tarball
	I0318 05:03:11.374220   21713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 05:03:12.524947   21713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.150750458s)
	I0318 05:03:12.524966   21713 ssh_runner.go:146] rm: /preloaded.tar.lz4
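
The preload sequence above (stat the target, scp the ~359 MB tarball when missing, extract with lz4, delete) condensed into one sketch; the xattr flags match the log so file capabilities survive extraction:

    package main

    import "os/exec"

    // extractPreload unpacks the preloaded image tarball into /var with the
    // same flags as the log, then removes the tarball, as ssh_runner does.
    func extractPreload(tarball string) error {
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        if err := cmd.Run(); err != nil {
            return err
        }
        return exec.Command("sudo", "rm", "-f", tarball).Run()
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4"); err != nil {
            panic(err)
        }
    }
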
	I0318 05:03:12.541091   21713 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 05:03:12.544083   21713 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0318 05:03:12.549054   21713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:12.635173   21713 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 05:03:14.292376   21713 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.657236917s)
	I0318 05:03:14.292503   21713 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 05:03:14.309481   21713 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 05:03:14.309490   21713 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 05:03:14.309495   21713 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 05:03:14.317934   21713 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:03:14.317967   21713 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 05:03:14.318054   21713 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:03:14.318239   21713 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 05:03:14.318249   21713 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0318 05:03:14.318432   21713 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 05:03:14.318757   21713 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 05:03:14.319242   21713 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0318 05:03:14.326989   21713 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 05:03:14.327018   21713 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 05:03:14.327125   21713 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 05:03:14.327206   21713 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 05:03:14.327395   21713 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0318 05:03:14.327399   21713 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:03:14.327710   21713 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:03:14.327948   21713 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0318 05:03:16.280259   21713 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0318 05:03:16.292117   21713 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0318 05:03:16.292149   21713 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0318 05:03:16.292213   21713 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0318 05:03:16.303725   21713 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0318 05:03:16.333359   21713 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0318 05:03:16.345135   21713 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0318 05:03:16.345157   21713 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0318 05:03:16.345215   21713 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0318 05:03:16.357519   21713 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0318 05:03:16.357635   21713 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0318 05:03:16.359342   21713 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0318 05:03:16.359365   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0318 05:03:16.363762   21713 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0318 05:03:16.367352   21713 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0318 05:03:16.367366   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0318 05:03:16.378978   21713 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0318 05:03:16.378999   21713 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 05:03:16.379057   21713 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	W0318 05:03:16.394352   21713 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0318 05:03:16.394501   21713 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:03:16.394737   21713 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0318 05:03:16.396780   21713 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0318 05:03:16.406946   21713 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
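
Every image in the cache_images loop goes through the same three steps visible above: inspect the ID in the runtime, compare it to the expected digest, and on mismatch remove the image and reload it from the cached tarball. A sketch (the digest is the pause:3.7 hash from the log, truncated):

    package main

    import (
        "os/exec"
        "strings"
    )

    // needsTransfer reports whether the runtime's copy of image differs from
    // the digest recorded in minikube's cache.
    func needsTransfer(image, wantID string) bool {
        out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
        if err != nil {
            return true // image not present at all
        }
        return !strings.Contains(strings.TrimSpace(string(out)), wantID)
    }

    // loadFromCache mirrors docker.go:304: stream the cached tarball into the
    // daemon with `docker load`.
    func loadFromCache(tarball string) error {
        return exec.Command("/bin/bash", "-c", "sudo cat "+tarball+" | docker load").Run()
    }

    func main() {
        img, want := "registry.k8s.io/pause:3.7", "e5a475a03805"
        if needsTransfer(img, want) {
            exec.Command("docker", "rmi", img).Run()
            loadFromCache("/var/lib/minikube/images/pause_3.7")
        }
    }
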
	I0318 05:03:16.407005   21713 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0318 05:03:16.414800   21713 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0318 05:03:16.414821   21713 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:03:16.414837   21713 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0318 05:03:16.414851   21713 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 05:03:16.414876   21713 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0318 05:03:16.414876   21713 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0318 05:03:16.418218   21713 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0318 05:03:16.418233   21713 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 05:03:16.418277   21713 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0318 05:03:16.427167   21713 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 05:03:16.436518   21713 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0318 05:03:16.436541   21713 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0318 05:03:16.436636   21713 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0318 05:03:16.447965   21713 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0318 05:03:16.447990   21713 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0318 05:03:16.448007   21713 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 05:03:16.448023   21713 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0318 05:03:16.448040   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0318 05:03:16.448047   21713 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 05:03:16.476046   21713 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0318 05:03:16.490607   21713 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0318 05:03:16.490621   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0318 05:03:16.524224   21713 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0318 05:03:16.894212   21713 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0318 05:03:16.894739   21713 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:03:16.933331   21713 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0318 05:03:16.933369   21713 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:03:16.933472   21713 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:03:16.961600   21713 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 05:03:16.961790   21713 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0318 05:03:16.963818   21713 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0318 05:03:16.963833   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0318 05:03:16.991168   21713 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 05:03:16.991182   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0318 05:03:17.222543   21713 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 05:03:17.222583   21713 cache_images.go:92] duration metric: took 2.913173334s to LoadCachedImages
	W0318 05:03:17.222627   21713 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
	I0318 05:03:17.222633   21713 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0318 05:03:17.222693   21713 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-211000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 05:03:17.222758   21713 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 05:03:17.235969   21713 cni.go:84] Creating CNI manager for ""
	I0318 05:03:17.235981   21713 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:03:17.235985   21713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 05:03:17.235994   21713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-211000 NodeName:stopped-upgrade-211000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 05:03:17.236060   21713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-211000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
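
The kubeadm config above is rendered from the options struct logged at kubeadm.go:181. A stripped-down sketch of that rendering step with text/template; the template text is abbreviated, the real one covers all four YAML documents:

    package main

    import (
        "os"
        "text/template"
    )

    type kubeadmOpts struct {
        AdvertiseAddress string
        APIServerPort    int
        PodSubnet        string
        ServiceCIDR      string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    ---
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        t.Execute(os.Stdout, kubeadmOpts{"10.0.2.15", 8443, "10.244.0.0/16", "10.96.0.0/12"})
    }
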
	I0318 05:03:17.236121   21713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0318 05:03:17.238994   21713 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 05:03:17.239023   21713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 05:03:17.241925   21713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0318 05:03:17.246688   21713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 05:03:17.251503   21713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0318 05:03:17.257144   21713 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0318 05:03:17.258373   21713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 05:03:17.262111   21713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:03:17.350115   21713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 05:03:17.355733   21713 certs.go:68] Setting up /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000 for IP: 10.0.2.15
	I0318 05:03:17.355740   21713 certs.go:194] generating shared ca certs ...
	I0318 05:03:17.355748   21713 certs.go:226] acquiring lock for ca certs: {Name:mk67337f74312fe6750257c43ce98e6fa0b5d738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:03:17.355981   21713 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.key
	I0318 05:03:17.356018   21713 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/proxy-client-ca.key
	I0318 05:03:17.356024   21713 certs.go:256] generating profile certs ...
	I0318 05:03:17.356080   21713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/client.key
	I0318 05:03:17.356097   21713 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.key.a3531d9c
	I0318 05:03:17.356108   21713 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.crt.a3531d9c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0318 05:03:17.420724   21713 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.crt.a3531d9c ...
	I0318 05:03:17.420734   21713 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.crt.a3531d9c: {Name:mk89c7cbcc3e59aca651554e0dcc4a0f6b744ecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:03:17.421007   21713 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.key.a3531d9c ...
	I0318 05:03:17.421013   21713 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.key.a3531d9c: {Name:mk3e238a5423a92cece846889e751e8c55965fd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:03:17.421143   21713 certs.go:381] copying /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.crt.a3531d9c -> /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.crt
	I0318 05:03:17.421270   21713 certs.go:385] copying /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.key.a3531d9c -> /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.key
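
crypto.go:68 above mints the apiserver serving cert with the cluster service IP, loopback, and the node IP as SANs. A compact crypto/x509 sketch of the same operation, self-signed for brevity where minikube signs with its minikubeCA key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
            IPAddresses: []net.IP{ // the SANs logged by crypto.go:68
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // self-signed for brevity; minikube signs with the minikubeCA key instead
        der, err := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
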
	I0318 05:03:17.421399   21713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/proxy-client.key
	I0318 05:03:17.421519   21713 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/19926.pem (1338 bytes)
	W0318 05:03:17.421538   21713 certs.go:480] ignoring /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/19926_empty.pem, impossibly tiny 0 bytes
	I0318 05:03:17.421543   21713 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 05:03:17.421565   21713 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem (1078 bytes)
	I0318 05:03:17.421582   21713 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem (1123 bytes)
	I0318 05:03:17.421600   21713 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/key.pem (1679 bytes)
	I0318 05:03:17.421640   21713 certs.go:484] found cert: /Users/jenkins/minikube-integration/18427-19517/.minikube/files/etc/ssl/certs/199262.pem (1708 bytes)
	I0318 05:03:17.421985   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 05:03:17.428961   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 05:03:17.435505   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 05:03:17.442703   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0318 05:03:17.449924   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 05:03:17.457261   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 05:03:17.463697   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 05:03:17.470517   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 05:03:17.477696   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/files/etc/ssl/certs/199262.pem --> /usr/share/ca-certificates/199262.pem (1708 bytes)
	I0318 05:03:17.484423   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 05:03:17.490978   21713 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/19926.pem --> /usr/share/ca-certificates/19926.pem (1338 bytes)
	I0318 05:03:17.498028   21713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 05:03:17.503502   21713 ssh_runner.go:195] Run: openssl version
	I0318 05:03:17.505311   21713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199262.pem && ln -fs /usr/share/ca-certificates/199262.pem /etc/ssl/certs/199262.pem"
	I0318 05:03:17.508236   21713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199262.pem
	I0318 05:03:17.509561   21713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 11:50 /usr/share/ca-certificates/199262.pem
	I0318 05:03:17.509579   21713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199262.pem
	I0318 05:03:17.511361   21713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199262.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 05:03:17.514716   21713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 05:03:17.518169   21713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 05:03:17.519707   21713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:02 /usr/share/ca-certificates/minikubeCA.pem
	I0318 05:03:17.519725   21713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 05:03:17.521490   21713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 05:03:17.524429   21713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19926.pem && ln -fs /usr/share/ca-certificates/19926.pem /etc/ssl/certs/19926.pem"
	I0318 05:03:17.527324   21713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19926.pem
	I0318 05:03:17.528802   21713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 11:50 /usr/share/ca-certificates/19926.pem
	I0318 05:03:17.528823   21713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19926.pem
	I0318 05:03:17.530568   21713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19926.pem /etc/ssl/certs/51391683.0"
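
The test/ln blocks above exist because OpenSSL locates trust anchors by subject-name hash: "openssl x509 -hash -noout" prints the hash, and verifiers look for <hash>.0 in /etc/ssl/certs. A sketch of installing one CA that way:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA links a PEM cert into /etc/ssl/certs under its subject hash so
    // OpenSSL-based clients can find it, as the ln -fs commands above do.
    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        fmt.Println("linking", link, "->", pemPath)
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
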
	I0318 05:03:17.533892   21713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 05:03:17.535336   21713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 05:03:17.537441   21713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 05:03:17.539399   21713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 05:03:17.541686   21713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 05:03:17.543547   21713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 05:03:17.545364   21713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 05:03:17.547132   21713 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54310 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 05:03:17.547195   21713 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 05:03:17.557207   21713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 05:03:17.560176   21713 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 05:03:17.560182   21713 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 05:03:17.560188   21713 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 05:03:17.560209   21713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 05:03:17.562881   21713 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 05:03:17.562912   21713 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-211000" does not appear in /Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:03:17.562926   21713 kubeconfig.go:62] /Users/jenkins/minikube-integration/18427-19517/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-211000" cluster setting kubeconfig missing "stopped-upgrade-211000" context setting]
	I0318 05:03:17.563098   21713 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/kubeconfig: {Name:mke65151970e01af41afaa654a36ecdb221d1a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:03:17.563759   21713 kapi.go:59] client config for stopped-upgrade-211000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/client.key", CAFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10656aa80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 05:03:17.564558   21713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 05:03:17.567147   21713 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-211000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
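
The drift check above rests on diff's exit status: 0 means the live kubeadm.yaml matches the freshly rendered one, 1 means the files differ and the cluster is reconfigured from the new file. A sketch:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // configDrifted runs `sudo diff -u old new` and maps the exit status:
    // 0 = identical, 1 = drifted, anything else = diff itself failed.
    func configDrifted(oldPath, newPath string) (bool, error) {
        err := exec.Command("sudo", "diff", "-u", oldPath, newPath).Run()
        if err == nil {
            return false, nil
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return true, nil
        }
        return false, err
    }

    func main() {
        drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(drifted, err)
    }
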
	I0318 05:03:17.567153   21713 kubeadm.go:1154] stopping kube-system containers ...
	I0318 05:03:17.567200   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 05:03:17.577864   21713 docker.go:483] Stopping containers: [c9608635c8f8 5f01a1c185ea 221a0b4b0ae5 faf1fd770eea f7ba78c6046c 64f4772f7d6b 8dac42bbc563 deb6e2882c0c]
	I0318 05:03:17.577936   21713 ssh_runner.go:195] Run: docker stop c9608635c8f8 5f01a1c185ea 221a0b4b0ae5 faf1fd770eea f7ba78c6046c 64f4772f7d6b 8dac42bbc563 deb6e2882c0c
	I0318 05:03:17.589578   21713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 05:03:17.594867   21713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 05:03:17.597951   21713 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 05:03:17.597956   21713 kubeadm.go:156] found existing configuration files:
	
	I0318 05:03:17.597976   21713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/admin.conf
	I0318 05:03:17.600479   21713 kubeadm.go:162] "https://control-plane.minikube.internal:54310" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 05:03:17.600500   21713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 05:03:17.603060   21713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/kubelet.conf
	I0318 05:03:17.606117   21713 kubeadm.go:162] "https://control-plane.minikube.internal:54310" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 05:03:17.606140   21713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 05:03:17.608848   21713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/controller-manager.conf
	I0318 05:03:17.611164   21713 kubeadm.go:162] "https://control-plane.minikube.internal:54310" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 05:03:17.611184   21713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 05:03:17.614145   21713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/scheduler.conf
	I0318 05:03:17.616941   21713 kubeadm.go:162] "https://control-plane.minikube.internal:54310" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 05:03:17.616963   21713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
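
The four grep/rm pairs above are one loop in kubeadm.go:162: a kubeconfig that does not mention the current control-plane URL is stale and removed before restart (grep also fails when the file is missing, as here, and rm -f is then a no-op). Condensed:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // cleanStaleConfigs keeps a kubeconfig only if it already references the
    // control-plane endpoint; everything else is removed so kubeadm can
    // regenerate it.
    func cleanStaleConfigs(endpoint string, files []string) {
        for _, f := range files {
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Println("removing stale", f)
                exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }

    func main() {
        cleanStaleConfigs("https://control-plane.minikube.internal:54310", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
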
	I0318 05:03:17.619264   21713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 05:03:17.622154   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 05:03:17.644538   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 05:03:18.184240   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 05:03:18.318866   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 05:03:18.347300   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 05:03:18.369434   21713 api_server.go:52] waiting for apiserver process to appear ...
	I0318 05:03:18.369525   21713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:03:18.871556   21713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:03:19.371555   21713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:03:19.376351   21713 api_server.go:72] duration metric: took 1.006952083s to wait for apiserver process to appear ...
	I0318 05:03:19.376360   21713 api_server.go:88] waiting for apiserver healthz status ...
	I0318 05:03:19.376387   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:24.378365   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:24.378394   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:29.378456   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:29.378491   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:34.378591   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:34.378611   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:39.378828   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:39.378895   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:44.379309   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:44.379379   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:49.380369   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:49.380460   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:54.381749   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:54.381802   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:03:59.383210   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:03:59.383251   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:04.385006   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:04.385058   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:09.387248   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:09.387291   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:14.389521   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:14.389594   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:19.390565   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
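
Each healthz probe above gives up after five seconds (the Client.Timeout in the error) and the loop retries until the overall wait expires, after which minikube falls back to gathering logs. A self-contained sketch of that loop; the skip-verify transport is a shortcut for the sketch, minikube instead verifies against its own CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, overall time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the per-request deadline in the log
            // sketch shortcut: minikube instead trusts its generated CA
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(overall)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not report healthy within %v", overall)
    }

    func main() {
        fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
    }
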
	I0318 05:04:19.390848   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:04:19.418825   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:04:19.418958   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:04:19.437989   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:04:19.438085   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:04:19.451284   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:04:19.451360   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:04:19.462515   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:04:19.462592   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:04:19.472537   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:04:19.472606   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:04:19.483688   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:04:19.483756   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:04:19.493898   21713 logs.go:276] 0 containers: []
	W0318 05:04:19.493909   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:04:19.493977   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:04:19.504609   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:04:19.504636   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:04:19.504643   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:04:19.518926   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:04:19.518938   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:04:19.538104   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:04:19.538116   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:04:19.549258   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:04:19.549267   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:04:19.564613   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:04:19.564625   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:04:19.602107   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:04:19.602120   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:04:19.606051   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:04:19.606060   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:04:19.624113   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:04:19.624123   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:04:19.641990   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:04:19.642003   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:04:19.655755   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:04:19.655768   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:04:19.667884   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:04:19.667896   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:04:19.679329   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:04:19.679340   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:04:19.694608   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:04:19.694619   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:04:19.706293   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:04:19.706307   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:04:19.820611   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:04:19.820625   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:04:19.848707   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:04:19.848718   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:04:19.861062   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:04:19.861073   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
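From here the report repeats the same diagnostic pass after every failed probe: the container IDs for each control-plane component are listed with docker ps name filters, then the last 400 lines of each container's logs are tailed (plus kubelet, dmesg, Docker, and describe-nodes output). A minimal Go sketch of that enumerate-and-tail pattern, assuming only the docker CLI on PATH; the component names and the 400-line tail mirror the commands recorded above, while the surrounding structure is illustrative rather than logs.go's real implementation.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        }
        for _, c := range components {
            // Equivalent of: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
            out, err := exec.Command("docker", "ps", "-a",
                "--filter=name=k8s_"+c, "--format={{.ID}}").Output()
            if err != nil {
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                // Matches the warning seen for "kindnet" in this log.
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                // Equivalent of: docker logs --tail 400 <id>
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
            }
        }
    }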
	I0318 05:04:22.388258   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:27.389906   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:27.390284   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:04:27.429529   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:04:27.429665   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:04:27.451715   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:04:27.451818   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:04:27.466420   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:04:27.466500   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:04:27.478784   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:04:27.478861   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:04:27.492495   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:04:27.492574   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:04:27.503351   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:04:27.503424   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:04:27.514052   21713 logs.go:276] 0 containers: []
	W0318 05:04:27.514062   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:04:27.514122   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:04:27.525007   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:04:27.525042   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:04:27.525047   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:04:27.551534   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:04:27.551554   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:04:27.567027   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:04:27.567043   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:04:27.586784   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:04:27.586794   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:04:27.611875   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:04:27.611885   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:04:27.616072   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:04:27.616078   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:04:27.629503   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:04:27.629520   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:04:27.641408   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:04:27.641422   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:04:27.659687   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:04:27.659706   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:04:27.671040   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:04:27.671051   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:04:27.709008   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:04:27.709021   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:04:27.723539   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:04:27.723552   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:04:27.736034   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:04:27.736044   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:04:27.750304   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:04:27.750315   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:04:27.761836   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:04:27.761851   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:04:27.775240   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:04:27.775255   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:04:27.787117   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:04:27.787129   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:04:30.328829   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:35.331420   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:35.331679   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:04:35.351656   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:04:35.351737   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:04:35.362892   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:04:35.362973   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:04:35.373768   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:04:35.373849   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:04:35.384768   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:04:35.384849   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:04:35.395278   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:04:35.395343   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:04:35.412018   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:04:35.412083   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:04:35.429443   21713 logs.go:276] 0 containers: []
	W0318 05:04:35.429457   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:04:35.429515   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:04:35.446901   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:04:35.446921   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:04:35.446927   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:04:35.472716   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:04:35.472728   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:04:35.487311   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:04:35.487326   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:04:35.502488   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:04:35.502500   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:04:35.520017   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:04:35.520028   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:04:35.531667   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:04:35.531681   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:04:35.542829   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:04:35.542840   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:04:35.547312   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:04:35.547322   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:04:35.561971   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:04:35.561982   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:04:35.574218   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:04:35.574231   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:04:35.592630   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:04:35.592641   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:04:35.629534   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:04:35.629542   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:04:35.665442   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:04:35.665454   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:04:35.679227   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:04:35.679237   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:04:35.691247   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:04:35.691261   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:04:35.703167   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:04:35.703178   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:04:35.716568   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:04:35.716578   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:04:38.242295   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:43.244502   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:43.244701   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:04:43.268247   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:04:43.268344   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:04:43.283299   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:04:43.283375   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:04:43.295157   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:04:43.295238   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:04:43.307620   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:04:43.307698   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:04:43.318488   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:04:43.318586   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:04:43.329095   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:04:43.329169   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:04:43.339422   21713 logs.go:276] 0 containers: []
	W0318 05:04:43.339435   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:04:43.339497   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:04:43.349532   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:04:43.349553   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:04:43.349559   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:04:43.391496   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:04:43.391507   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:04:43.404978   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:04:43.404988   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:04:43.430699   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:04:43.430707   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:04:43.468987   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:04:43.468997   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:04:43.493883   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:04:43.493893   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:04:43.508880   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:04:43.508890   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:04:43.524182   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:04:43.524194   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:04:43.535432   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:04:43.535452   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:04:43.546852   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:04:43.546864   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:04:43.560793   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:04:43.560804   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:04:43.571973   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:04:43.571985   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:04:43.583548   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:04:43.583559   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:04:43.594737   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:04:43.594748   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:04:43.609144   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:04:43.609154   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:04:43.626895   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:04:43.626908   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:04:43.638946   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:04:43.638956   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:04:46.144953   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:51.147145   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:51.147343   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:04:51.162798   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:04:51.162890   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:04:51.179184   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:04:51.179255   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:04:51.193957   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:04:51.194023   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:04:51.211934   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:04:51.212011   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:04:51.222115   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:04:51.222180   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:04:51.232256   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:04:51.232324   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:04:51.242168   21713 logs.go:276] 0 containers: []
	W0318 05:04:51.242179   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:04:51.242230   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:04:51.252561   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:04:51.252578   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:04:51.252583   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:04:51.266780   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:04:51.266797   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:04:51.291979   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:04:51.291992   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:04:51.303841   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:04:51.303852   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:04:51.318911   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:04:51.318924   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:04:51.323010   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:04:51.323017   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:04:51.337678   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:04:51.337691   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:04:51.349098   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:04:51.349111   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:04:51.369644   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:04:51.369655   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:04:51.381175   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:04:51.381189   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:04:51.417509   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:04:51.417523   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:04:51.440591   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:04:51.440598   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:04:51.455493   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:04:51.455504   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:04:51.469278   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:04:51.469290   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:04:51.483559   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:04:51.483571   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:04:51.501199   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:04:51.501210   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:04:51.513224   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:04:51.513241   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:04:54.053467   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:04:59.055622   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:04:59.055911   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:04:59.078766   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:04:59.078889   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:04:59.094465   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:04:59.094547   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:04:59.107448   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:04:59.107529   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:04:59.118145   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:04:59.118215   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:04:59.128915   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:04:59.128990   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:04:59.139280   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:04:59.139344   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:04:59.149125   21713 logs.go:276] 0 containers: []
	W0318 05:04:59.149137   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:04:59.149196   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:04:59.159813   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:04:59.159832   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:04:59.159838   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:04:59.175321   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:04:59.175332   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:04:59.179937   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:04:59.179947   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:04:59.191313   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:04:59.191323   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:04:59.207276   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:04:59.207289   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:04:59.217938   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:04:59.217950   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:04:59.230901   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:04:59.230912   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:04:59.266178   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:04:59.266191   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:04:59.279872   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:04:59.279886   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:04:59.294845   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:04:59.294856   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:04:59.306870   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:04:59.306883   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:04:59.330057   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:04:59.330064   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:04:59.366868   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:04:59.366876   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:04:59.394878   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:04:59.394889   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:04:59.406233   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:04:59.406248   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:04:59.417850   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:04:59.417862   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:04:59.435049   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:04:59.435058   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:05:01.950640   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:06.952855   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:06.953136   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:06.978960   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:05:06.979085   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:06.995469   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:05:06.995569   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:07.009036   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:05:07.009104   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:07.037112   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:05:07.037189   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:07.047696   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:05:07.047765   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:07.058343   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:05:07.058416   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:07.068892   21713 logs.go:276] 0 containers: []
	W0318 05:05:07.068903   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:07.068962   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:07.079668   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:05:07.079687   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:07.079695   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:07.084069   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:05:07.084078   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:05:07.109052   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:05:07.109063   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:05:07.123863   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:05:07.123876   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:05:07.135805   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:05:07.135820   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:05:07.151983   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:05:07.151995   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:05:07.163914   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:05:07.163926   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:07.175923   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:07.175934   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:07.213647   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:05:07.213662   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:05:07.228238   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:05:07.228247   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:05:07.239971   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:05:07.239986   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:05:07.258188   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:05:07.258203   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:05:07.274153   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:07.274168   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:07.296993   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:07.297004   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:07.333684   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:05:07.333692   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:05:07.345925   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:05:07.345939   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:05:07.366732   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:05:07.366743   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:05:09.881853   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:14.883814   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:14.884203   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:14.920095   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:05:14.920239   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:14.946185   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:05:14.946273   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:14.959853   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:05:14.959934   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:14.971379   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:05:14.971460   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:14.982037   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:05:14.982108   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:14.992619   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:05:14.992685   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:15.003198   21713 logs.go:276] 0 containers: []
	W0318 05:05:15.003209   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:15.003271   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:15.015100   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:05:15.015118   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:15.015124   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:15.019775   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:05:15.019783   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:05:15.033768   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:05:15.033780   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:05:15.045298   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:05:15.045309   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:05:15.056884   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:05:15.056896   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:05:15.068213   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:05:15.068226   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:05:15.094866   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:15.094881   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:15.120580   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:15.120591   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:15.155761   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:05:15.155772   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:05:15.169967   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:05:15.169981   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:05:15.185181   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:05:15.185194   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:05:15.203346   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:15.203356   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:15.241786   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:05:15.241795   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:05:15.255710   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:05:15.255721   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:05:15.266964   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:05:15.266975   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:05:15.280610   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:05:15.280621   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:05:15.292887   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:05:15.292897   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:17.805979   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:22.811605   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:22.811762   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:22.823723   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:05:22.823796   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:22.834903   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:05:22.834974   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:22.844828   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:05:22.844905   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:22.857655   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:05:22.857726   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:22.867826   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:05:22.867899   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:22.878330   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:05:22.878413   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:22.888924   21713 logs.go:276] 0 containers: []
	W0318 05:05:22.888937   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:22.888995   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:22.899759   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:05:22.899782   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:22.899788   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:22.936945   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:05:22.936952   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:05:22.961881   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:05:22.961892   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:05:22.975779   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:05:22.975792   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:05:22.986903   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:05:22.986917   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:05:23.000685   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:05:23.000695   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:05:23.012037   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:23.012049   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:23.016522   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:23.016528   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:23.052447   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:05:23.052460   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:05:23.066807   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:05:23.066818   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:05:23.084313   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:05:23.084323   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:05:23.105053   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:05:23.105067   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:23.116649   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:05:23.116661   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:05:23.132360   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:05:23.132372   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:05:23.144178   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:05:23.144191   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:05:23.158880   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:05:23.158891   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:05:23.175351   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:23.175363   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:25.701331   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:30.703546   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:30.703923   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:30.740641   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:05:30.740784   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:30.758842   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:05:30.758933   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:30.773276   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:05:30.773349   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:30.786043   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:05:30.786127   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:30.796340   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:05:30.796417   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:30.807536   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:05:30.807622   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:30.818470   21713 logs.go:276] 0 containers: []
	W0318 05:05:30.818482   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:30.818541   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:30.833067   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:05:30.833087   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:30.833092   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:30.868946   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:05:30.868957   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:05:30.880708   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:05:30.880720   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:05:30.892559   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:05:30.892569   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:05:30.910059   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:05:30.910069   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:05:30.934833   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:05:30.934846   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:05:30.949728   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:05:30.949739   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:05:30.963775   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:05:30.963786   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:05:30.981741   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:05:30.981755   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:05:30.996303   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:05:30.996314   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:05:31.007646   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:05:31.007655   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:05:31.020230   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:31.020241   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:31.058239   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:31.058247   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:31.062052   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:05:31.062058   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:05:31.076014   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:05:31.076026   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:05:31.087068   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:31.087079   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:31.111755   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:05:31.111771   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:33.625835   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:38.627216   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:38.627364   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:38.647261   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:05:38.647373   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:38.660870   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:05:38.660939   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:38.671336   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:05:38.671402   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:38.681583   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:05:38.681655   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:38.692233   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:05:38.692301   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:38.702873   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:05:38.702948   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:38.712973   21713 logs.go:276] 0 containers: []
	W0318 05:05:38.712983   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:38.713035   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:38.723845   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:05:38.723864   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:38.723871   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:38.761803   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:05:38.761814   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:05:38.773355   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:38.773367   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:38.797564   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:05:38.797574   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:05:38.822041   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:05:38.822054   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:05:38.836253   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:05:38.836264   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:05:38.853395   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:05:38.853407   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:05:38.864815   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:05:38.864825   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:05:38.882532   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:05:38.882542   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:38.894243   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:05:38.894253   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:05:38.907951   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:05:38.907962   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:05:38.923802   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:38.923814   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:38.960681   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:38.960690   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:38.964687   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:05:38.964694   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:05:38.978659   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:05:38.978670   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:05:38.992105   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:05:38.992114   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:05:39.003463   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:05:39.003478   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
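
	The cycle above repeats through the rest of this log: each /healthz probe gives up after roughly five seconds, minikube enumerates the control-plane containers, tails their logs, and tries again. A minimal Go sketch of that probe loop, assuming a 5-second client timeout and skipped TLS verification for the self-signed apiserver certificate (both inferred from the log, not taken from minikube's source):

	    // healthzpoll.go - a minimal sketch, not minikube's actual implementation,
	    // of the retry pattern visible above: poll an apiserver /healthz endpoint
	    // with a per-request timeout and report when it stops responding.
	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        // Endpoint taken from the log; the TLS setting is an assumption so the
	        // sketch can talk to an apiserver with a self-signed certificate.
	        url := "https://10.0.2.15:8443/healthz"
	        client := &http.Client{
	            Timeout: 5 * time.Second, // matches the ~5 s gap between "Checking" and "stopped"
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        for attempt := 1; attempt <= 3; attempt++ {
	            fmt.Printf("Checking apiserver healthz at %s ...\n", url)
	            resp, err := client.Get(url)
	            if err != nil {
	                // On timeout this is where the log reports "context deadline
	                // exceeded (Client.Timeout exceeded while awaiting headers)".
	                fmt.Printf("stopped: %v\n", err)
	                time.Sleep(2 * time.Second) // brief back-off before the next probe
	                continue
	            }
	            resp.Body.Close()
	            fmt.Printf("healthz returned %s\n", resp.Status)
	            return
	        }
	    }
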
	I0318 05:05:41.524659   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:46.526914   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:46.527070   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:46.543797   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:05:46.543893   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:46.556558   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:05:46.556635   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:46.566907   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:05:46.566977   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:46.577625   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:05:46.577700   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:46.587605   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:05:46.587680   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:46.598604   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:05:46.598673   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:46.608666   21713 logs.go:276] 0 containers: []
	W0318 05:05:46.608676   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:46.608734   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:46.619461   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:05:46.619479   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:05:46.619485   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:05:46.634442   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:05:46.634453   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:46.646856   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:05:46.646867   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:05:46.661895   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:05:46.661908   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:05:46.680697   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:05:46.680710   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:05:46.692122   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:05:46.692133   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:05:46.703601   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:46.703613   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:46.742440   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:46.742458   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:46.746435   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:05:46.746441   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:05:46.760396   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:05:46.760407   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:05:46.784110   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:05:46.784124   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:05:46.795256   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:05:46.795269   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:05:46.813082   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:05:46.813093   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:05:46.826078   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:46.826087   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:46.849899   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:46.849907   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:46.885076   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:05:46.885087   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:05:46.899306   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:05:46.899316   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
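
	Each retry cycle starts by listing candidate containers for every component with a docker ps name filter; an empty result produces the "No container was found matching" warning seen for kindnet. A self-contained sketch of that enumeration, with the component names copied from the log and everything else illustrative:

	    // listcontainers.go - sketch of the enumeration step: for each component,
	    // list matching container IDs, mirroring
	    // "docker ps -a --filter=name=k8s_<name> --format={{.ID}}".
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        components := []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	        }
	        for _, c := range components {
	            out, err := exec.Command("docker", "ps", "-a",
	                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
	            if err != nil {
	                fmt.Printf("listing %s failed: %v\n", c, err)
	                continue
	            }
	            ids := strings.Fields(string(out)) // one ID per line from --format={{.ID}}
	            fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	            if len(ids) == 0 {
	                fmt.Printf("No container was found matching %q\n", c)
	            }
	        }
	    }
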
	I0318 05:05:49.413049   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:05:54.415179   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:05:54.415284   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:05:54.427488   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:05:54.427564   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:05:54.439708   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:05:54.439775   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:05:54.449921   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:05:54.449995   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:05:54.460516   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:05:54.460591   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:05:54.473729   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:05:54.473806   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:05:54.484232   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:05:54.484308   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:05:54.494518   21713 logs.go:276] 0 containers: []
	W0318 05:05:54.494531   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:05:54.494591   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:05:54.504923   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:05:54.504942   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:05:54.504946   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:05:54.516433   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:05:54.516447   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:05:54.551554   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:05:54.551570   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:05:54.566046   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:05:54.566057   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:05:54.584006   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:05:54.584020   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:05:54.595747   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:05:54.595759   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:05:54.619115   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:05:54.619124   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:05:54.630386   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:05:54.630397   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:05:54.668403   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:05:54.668419   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:05:54.682855   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:05:54.682866   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:05:54.707681   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:05:54.707692   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:05:54.720024   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:05:54.720037   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:05:54.736202   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:05:54.736215   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:05:54.740385   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:05:54.740393   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:05:54.754429   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:05:54.754441   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:05:54.769066   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:05:54.769075   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:05:54.779878   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:05:54.779889   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
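
	For each ID found, the "Gathering logs" steps tail the last 400 lines of that container's output through /bin/bash -c. A sketch under the assumption that the command runs locally (minikube actually executes it inside the guest via ssh_runner); tailContainerLogs is a hypothetical helper, not a minikube function:

	    // gatherlogs.go - illustrative only: tail the last 400 log lines of each
	    // container the way the "Gathering logs for ..." steps above do.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // tailContainerLogs is a hypothetical helper; the real code path goes
	    // through ssh_runner.Run on the guest VM rather than a local exec.
	    func tailContainerLogs(id string) (string, error) {
	        cmd := exec.Command("/bin/bash", "-c", "docker logs --tail 400 "+id)
	        out, err := cmd.CombinedOutput()
	        return string(out), err
	    }

	    func main() {
	        // Container IDs copied from the log above.
	        for _, id := range []string{"ec8d6ef7a1b8", "b3c9a550f473", "d2e6354f4cd0"} {
	            logs, err := tailContainerLogs(id)
	            if err != nil {
	                fmt.Printf("docker logs %s failed: %v\n", id, err)
	                continue
	            }
	            fmt.Printf("=== %s ===\n%s", id, logs)
	        }
	    }
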
	I0318 05:05:57.293656   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:02.295836   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:02.296175   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:02.322850   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:06:02.322975   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:02.340619   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:06:02.340697   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:02.353938   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:06:02.354003   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:02.365822   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:06:02.365892   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:02.376390   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:06:02.376458   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:02.387376   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:06:02.387441   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:02.397747   21713 logs.go:276] 0 containers: []
	W0318 05:06:02.397757   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:02.397814   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:02.408330   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:06:02.408349   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:06:02.408354   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:06:02.422146   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:06:02.422155   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:06:02.433096   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:02.433107   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:02.482325   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:06:02.482340   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:06:02.511741   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:06:02.511751   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:06:02.529351   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:02.529361   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:02.553577   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:02.553591   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:02.558037   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:06:02.558044   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:06:02.573258   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:06:02.573272   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:06:02.586376   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:06:02.586391   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:06:02.597184   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:06:02.597198   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:06:02.608248   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:06:02.608262   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:02.620187   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:06:02.620201   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:06:02.647284   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:06:02.647297   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:06:02.661434   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:06:02.661450   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:06:02.680590   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:02.680603   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:02.719088   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:06:02.719101   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:06:05.233123   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:10.235218   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:10.235386   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:10.245751   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:06:10.245832   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:10.256979   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:06:10.257051   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:10.266988   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:06:10.267056   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:10.277941   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:06:10.278017   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:10.288110   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:06:10.288177   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:10.298731   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:06:10.298800   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:10.309180   21713 logs.go:276] 0 containers: []
	W0318 05:06:10.309193   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:10.309249   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:10.319177   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:06:10.319197   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:06:10.319202   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:06:10.343560   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:06:10.343570   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:06:10.361145   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:06:10.361157   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:06:10.372998   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:06:10.373010   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:10.384952   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:10.384969   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:10.389114   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:06:10.389122   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:06:10.403350   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:06:10.403361   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:06:10.422459   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:06:10.422469   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:06:10.439996   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:10.440007   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:10.463456   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:10.463463   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:10.498611   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:06:10.498621   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:06:10.513529   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:06:10.513541   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:06:10.526686   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:06:10.526699   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:06:10.543499   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:06:10.543510   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:06:10.555418   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:10.555430   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:10.594035   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:06:10.594046   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:06:10.608729   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:06:10.608741   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
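
	Besides container logs, every cycle collects host-side evidence: the last 400 journal entries for the kubelet and for the docker/cri-docker units, plus kernel messages at warning level or worse. A sketch of those commands, run through bash -c exactly as the log shows:

	    // hostlogs.go - sketch of the host-log collection seen in each cycle.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func runBash(script string) {
	        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	        if err != nil {
	            fmt.Printf("%q failed: %v\n", script, err)
	            return
	        }
	        fmt.Print(string(out))
	    }

	    func main() {
	        // Last 400 journal lines for the kubelet, then docker and cri-docker together.
	        runBash("sudo journalctl -u kubelet -n 400")
	        runBash("sudo journalctl -u docker -u cri-docker -n 400")
	        // Kernel messages at warning level or worse; -P disables the pager,
	        // -H forces human-readable output, -L=never disables colored output.
	        runBash("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	    }
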
	I0318 05:06:13.122613   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:18.124651   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:18.124916   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:18.161350   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:06:18.161450   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:18.176575   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:06:18.176653   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:18.189055   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:06:18.189129   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:18.201079   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:06:18.201164   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:18.213628   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:06:18.213704   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:18.224206   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:06:18.224275   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:18.240293   21713 logs.go:276] 0 containers: []
	W0318 05:06:18.240305   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:18.240367   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:18.250431   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:06:18.250448   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:06:18.250453   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:06:18.261647   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:06:18.261656   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:06:18.278433   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:06:18.278446   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:18.290064   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:18.290078   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:18.324964   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:06:18.324976   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:06:18.336389   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:06:18.336400   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:06:18.349675   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:06:18.349687   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:06:18.361049   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:18.361060   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:18.397481   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:06:18.397490   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:06:18.411269   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:06:18.411281   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:06:18.425599   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:06:18.425609   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:06:18.437170   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:06:18.437179   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:06:18.452000   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:18.452011   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:18.456089   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:06:18.456095   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:06:18.469811   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:06:18.469824   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:06:18.494315   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:06:18.494327   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:06:18.505803   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:18.505813   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:21.028685   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:26.030891   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:26.031264   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:26.070473   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:06:26.070619   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:26.090965   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:06:26.091069   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:26.106879   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:06:26.106976   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:26.119422   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:06:26.119495   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:26.130367   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:06:26.130431   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:26.146006   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:06:26.146073   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:26.156409   21713 logs.go:276] 0 containers: []
	W0318 05:06:26.156422   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:26.156484   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:26.166782   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:06:26.166798   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:06:26.166804   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:06:26.178521   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:26.178531   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:26.202588   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:06:26.202596   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:26.214687   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:26.214698   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:26.253364   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:06:26.253373   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:06:26.267236   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:06:26.267248   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:06:26.281856   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:06:26.281868   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:06:26.295472   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:06:26.295485   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:06:26.310978   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:06:26.310988   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:06:26.328240   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:06:26.328251   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:06:26.341960   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:06:26.341971   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:06:26.353021   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:26.353034   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:26.357305   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:06:26.357312   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:06:26.385668   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:06:26.385678   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:06:26.400680   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:26.400691   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:26.441958   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:06:26.441971   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:06:26.456609   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:06:26.456619   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
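
	The "container status" one-liner is worth unpacking: the backticks substitute the full path to crictl when it is installed, `echo crictl` keeps a command word in place when it is not (so the line still parses and fails cleanly), and the trailing `|| sudo docker ps -a` falls back to listing containers via Docker. A sketch that runs the line verbatim:

	    // containerstatus.go - sketch of the crictl-with-docker-fallback one-liner.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // Copied verbatim from the "container status" steps above: prefer
	        // crictl when present, otherwise fall back to plain "docker ps -a".
	        script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	        if err != nil {
	            fmt.Printf("container status failed: %v\n", err)
	            return
	        }
	        fmt.Print(string(out))
	    }
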
	I0318 05:06:28.969434   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:33.971079   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:33.971298   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:33.992349   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:06:33.992450   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:34.007266   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:06:34.007345   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:34.019257   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:06:34.019323   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:34.029854   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:06:34.029927   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:34.044272   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:06:34.044345   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:34.055346   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:06:34.055408   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:34.065457   21713 logs.go:276] 0 containers: []
	W0318 05:06:34.065508   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:34.065575   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:34.076124   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:06:34.076141   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:06:34.076146   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:06:34.093532   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:06:34.093548   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:06:34.105226   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:06:34.105237   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:06:34.116867   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:06:34.116878   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:34.129025   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:34.129037   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:34.133283   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:06:34.133291   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:06:34.166718   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:06:34.166727   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:06:34.181109   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:06:34.181119   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:06:34.192488   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:06:34.192500   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:06:34.209045   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:06:34.209058   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:06:34.220838   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:34.220847   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:34.245744   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:06:34.245751   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:06:34.258019   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:06:34.258032   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:06:34.272033   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:34.272047   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:34.311251   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:34.311267   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:34.348011   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:06:34.348025   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:06:34.362463   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:06:34.362480   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:06:36.887577   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:41.888635   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:41.888823   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:41.904699   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:06:41.904774   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:41.919820   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:06:41.919897   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:41.931477   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:06:41.931550   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:41.941914   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:06:41.941985   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:41.952367   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:06:41.952440   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:41.963017   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:06:41.963089   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:41.977759   21713 logs.go:276] 0 containers: []
	W0318 05:06:41.977774   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:41.977830   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:41.988210   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:06:41.988229   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:06:41.988234   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:06:42.002479   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:06:42.002493   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:06:42.014087   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:42.014098   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:42.035858   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:06:42.035866   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:06:42.048617   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:42.048629   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:42.053258   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:06:42.053266   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:06:42.067316   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:06:42.067327   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:06:42.083361   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:06:42.083374   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:06:42.101359   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:06:42.101371   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:06:42.113569   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:42.113580   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:42.151025   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:06:42.151033   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:06:42.175683   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:06:42.175695   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:06:42.187230   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:06:42.187241   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:06:42.198778   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:06:42.198792   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:42.212192   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:42.212205   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:42.247835   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:06:42.247861   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:06:42.262318   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:06:42.262331   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
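
	The "describe nodes" step invokes the version-pinned kubectl binary shipped inside the VM (v1.24.1 here) against the guest's own kubeconfig, so it works even when the host has no usable kubectl context. A sketch with both paths copied from the log, assuming it runs inside the guest as minikube does via ssh_runner:

	    // describenodes.go - sketch of the "describe nodes" gathering step.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // Binary and kubeconfig paths are copied from the log above.
	        cmd := exec.Command("/bin/bash", "-c",
	            "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes"+
	                " --kubeconfig=/var/lib/minikube/kubeconfig")
	        out, err := cmd.CombinedOutput()
	        if err != nil {
	            fmt.Printf("describe nodes failed: %v\n", err)
	            return
	        }
	        fmt.Print(string(out))
	    }
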
	I0318 05:06:44.777434   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:49.778559   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:49.778806   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:49.807670   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:06:49.807781   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:49.826965   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:06:49.827047   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:49.840504   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:06:49.840574   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:49.852001   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:06:49.852074   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:49.862379   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:06:49.862450   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:49.872952   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:06:49.873023   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:49.882876   21713 logs.go:276] 0 containers: []
	W0318 05:06:49.882890   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:49.882947   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:49.895456   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:06:49.895473   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:06:49.895478   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:06:49.908790   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:06:49.908802   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:06:49.923944   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:06:49.923958   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:06:49.937608   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:49.937622   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:49.959373   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:06:49.959383   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:06:49.971037   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:06:49.971050   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:06:49.981721   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:49.981732   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:49.985997   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:06:49.986005   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:06:49.997532   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:06:49.997544   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:06:50.009265   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:06:50.009280   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:06:50.026483   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:06:50.026494   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:50.038115   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:50.038127   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:50.076654   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:50.076664   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:50.112578   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:06:50.112594   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:06:50.145688   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:06:50.145699   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:06:50.160526   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:06:50.160540   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:06:50.174767   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:06:50.174783   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:06:52.688426   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:06:57.690486   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:06:57.690811   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:06:57.722647   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:06:57.722788   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:06:57.748075   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:06:57.748159   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:06:57.761524   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:06:57.761596   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:06:57.774681   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:06:57.774754   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:06:57.785636   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:06:57.785699   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:06:57.796168   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:06:57.796242   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:06:57.806700   21713 logs.go:276] 0 containers: []
	W0318 05:06:57.806712   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:06:57.806775   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:06:57.816900   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:06:57.816919   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:06:57.816924   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:06:57.839296   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:06:57.839309   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:06:57.851064   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:06:57.851078   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:06:57.876589   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:06:57.876602   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:06:57.890752   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:06:57.890765   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:06:57.901642   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:06:57.901654   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:06:57.914906   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:06:57.914919   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:06:57.926628   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:06:57.926638   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:06:57.930524   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:06:57.930533   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:06:57.944647   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:06:57.944658   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:06:57.959272   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:06:57.959284   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:06:57.970737   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:06:57.970748   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:06:57.982307   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:06:57.982321   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:06:57.998863   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:06:57.998875   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:06:58.022424   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:06:58.022433   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:06:58.059403   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:06:58.059413   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:06:58.094456   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:06:58.094467   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:07:00.610653   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:05.612616   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:05.612888   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:07:05.638106   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:07:05.638237   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:07:05.655470   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:07:05.655560   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:07:05.669006   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:07:05.669079   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:07:05.680413   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:07:05.680489   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:07:05.692422   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:07:05.692498   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:07:05.703459   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:07:05.703527   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:07:05.713765   21713 logs.go:276] 0 containers: []
	W0318 05:07:05.713777   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:07:05.713837   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:07:05.724209   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:07:05.724228   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:07:05.724234   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:07:05.730775   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:07:05.730784   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:07:05.744999   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:07:05.745013   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:07:05.769900   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:07:05.769911   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:07:05.781746   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:07:05.781756   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:07:05.799120   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:07:05.799131   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:07:05.817348   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:07:05.817360   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:07:05.857686   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:07:05.857709   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:07:05.876331   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:07:05.876343   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:07:05.894849   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:07:05.894861   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:07:05.934471   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:07:05.934506   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:07:05.945835   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:07:05.945853   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:07:05.957756   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:07:05.957767   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:07:05.976834   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:07:05.976843   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:07:05.991188   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:07:05.991197   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:07:06.005197   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:07:06.005208   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:07:06.018407   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:07:06.018418   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:07:08.540133   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:13.542254   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:13.542670   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:07:13.584658   21713 logs.go:276] 2 containers: [ec8d6ef7a1b8 221a0b4b0ae5]
	I0318 05:07:13.584812   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:07:13.607002   21713 logs.go:276] 2 containers: [b3c9a550f473 c9608635c8f8]
	I0318 05:07:13.607116   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:07:13.621539   21713 logs.go:276] 1 containers: [d2e6354f4cd0]
	I0318 05:07:13.621613   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:07:13.634416   21713 logs.go:276] 2 containers: [f17381de79b9 faf1fd770eea]
	I0318 05:07:13.634488   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:07:13.645261   21713 logs.go:276] 1 containers: [0309d5ddb06e]
	I0318 05:07:13.645328   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:07:13.655660   21713 logs.go:276] 2 containers: [e8923157be93 5f01a1c185ea]
	I0318 05:07:13.655729   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:07:13.672034   21713 logs.go:276] 0 containers: []
	W0318 05:07:13.672049   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:07:13.672107   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:07:13.682773   21713 logs.go:276] 2 containers: [877136b5d9e0 f812c6602c7c]
	I0318 05:07:13.682795   21713 logs.go:123] Gathering logs for storage-provisioner [f812c6602c7c] ...
	I0318 05:07:13.682800   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f812c6602c7c"
	I0318 05:07:13.697846   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:07:13.697860   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:07:13.733982   21713 logs.go:123] Gathering logs for kube-apiserver [221a0b4b0ae5] ...
	I0318 05:07:13.733993   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a0b4b0ae5"
	I0318 05:07:13.761176   21713 logs.go:123] Gathering logs for kube-proxy [0309d5ddb06e] ...
	I0318 05:07:13.761187   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0309d5ddb06e"
	I0318 05:07:13.772900   21713 logs.go:123] Gathering logs for kube-controller-manager [e8923157be93] ...
	I0318 05:07:13.772915   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8923157be93"
	I0318 05:07:13.791527   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:07:13.791540   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:07:13.803177   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:07:13.803190   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:07:13.807709   21713 logs.go:123] Gathering logs for kube-apiserver [ec8d6ef7a1b8] ...
	I0318 05:07:13.807718   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8d6ef7a1b8"
	I0318 05:07:13.821598   21713 logs.go:123] Gathering logs for etcd [b3c9a550f473] ...
	I0318 05:07:13.821612   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c9a550f473"
	I0318 05:07:13.840288   21713 logs.go:123] Gathering logs for kube-scheduler [f17381de79b9] ...
	I0318 05:07:13.840301   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f17381de79b9"
	I0318 05:07:13.852603   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:07:13.852616   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:07:13.876935   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:07:13.876946   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 05:07:13.916017   21713 logs.go:123] Gathering logs for etcd [c9608635c8f8] ...
	I0318 05:07:13.916037   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9608635c8f8"
	I0318 05:07:13.930684   21713 logs.go:123] Gathering logs for coredns [d2e6354f4cd0] ...
	I0318 05:07:13.930698   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e6354f4cd0"
	I0318 05:07:13.941980   21713 logs.go:123] Gathering logs for kube-scheduler [faf1fd770eea] ...
	I0318 05:07:13.941990   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1fd770eea"
	I0318 05:07:13.957122   21713 logs.go:123] Gathering logs for kube-controller-manager [5f01a1c185ea] ...
	I0318 05:07:13.957132   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f01a1c185ea"
	I0318 05:07:13.970671   21713 logs.go:123] Gathering logs for storage-provisioner [877136b5d9e0] ...
	I0318 05:07:13.970685   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 877136b5d9e0"
	I0318 05:07:16.483926   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:21.484665   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:21.484804   21713 kubeadm.go:591] duration metric: took 4m3.932349459s to restartPrimaryControlPlane
	W0318 05:07:21.484970   21713 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 05:07:21.485032   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0318 05:07:22.515182   21713 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.030168042s)
	I0318 05:07:22.515251   21713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 05:07:22.520491   21713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 05:07:22.523476   21713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 05:07:22.526385   21713 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 05:07:22.526391   21713 kubeadm.go:156] found existing configuration files:
	
	I0318 05:07:22.526417   21713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/admin.conf
	I0318 05:07:22.528857   21713 kubeadm.go:162] "https://control-plane.minikube.internal:54310" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 05:07:22.528881   21713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 05:07:22.531603   21713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/kubelet.conf
	I0318 05:07:22.534497   21713 kubeadm.go:162] "https://control-plane.minikube.internal:54310" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 05:07:22.534519   21713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 05:07:22.537095   21713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/controller-manager.conf
	I0318 05:07:22.539699   21713 kubeadm.go:162] "https://control-plane.minikube.internal:54310" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 05:07:22.539720   21713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 05:07:22.542891   21713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/scheduler.conf
	I0318 05:07:22.545772   21713 kubeadm.go:162] "https://control-plane.minikube.internal:54310" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:54310 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 05:07:22.545794   21713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 05:07:22.548264   21713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 05:07:22.565193   21713 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0318 05:07:22.565226   21713 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 05:07:22.616035   21713 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 05:07:22.616093   21713 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 05:07:22.616139   21713 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 05:07:22.664530   21713 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 05:07:22.668715   21713 out.go:204]   - Generating certificates and keys ...
	I0318 05:07:22.668751   21713 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 05:07:22.668785   21713 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 05:07:22.668822   21713 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 05:07:22.668852   21713 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 05:07:22.668886   21713 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 05:07:22.668916   21713 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 05:07:22.668950   21713 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 05:07:22.668985   21713 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 05:07:22.669034   21713 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 05:07:22.669087   21713 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 05:07:22.669106   21713 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 05:07:22.669156   21713 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 05:07:22.783021   21713 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 05:07:22.843599   21713 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 05:07:23.087842   21713 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 05:07:23.188107   21713 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 05:07:23.217640   21713 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 05:07:23.218039   21713 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 05:07:23.218089   21713 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 05:07:23.304008   21713 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 05:07:23.307244   21713 out.go:204]   - Booting up control plane ...
	I0318 05:07:23.307291   21713 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 05:07:23.307335   21713 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 05:07:23.307372   21713 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 05:07:23.307419   21713 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 05:07:23.307548   21713 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 05:07:28.311017   21713 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.006590 seconds
	I0318 05:07:28.311336   21713 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 05:07:28.322018   21713 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 05:07:28.834451   21713 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 05:07:28.834583   21713 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-211000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 05:07:29.339653   21713 kubeadm.go:309] [bootstrap-token] Using token: zzghot.6ejp1jln0cyhdi5r
	I0318 05:07:29.343508   21713 out.go:204]   - Configuring RBAC rules ...
	I0318 05:07:29.343579   21713 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 05:07:29.343659   21713 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 05:07:29.351009   21713 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 05:07:29.352389   21713 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 05:07:29.353413   21713 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 05:07:29.355738   21713 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 05:07:29.358904   21713 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 05:07:29.542155   21713 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 05:07:29.743884   21713 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 05:07:29.744359   21713 kubeadm.go:309] 
	I0318 05:07:29.744388   21713 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 05:07:29.744396   21713 kubeadm.go:309] 
	I0318 05:07:29.744439   21713 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 05:07:29.744443   21713 kubeadm.go:309] 
	I0318 05:07:29.744454   21713 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 05:07:29.744481   21713 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 05:07:29.744511   21713 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 05:07:29.744515   21713 kubeadm.go:309] 
	I0318 05:07:29.744544   21713 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 05:07:29.744547   21713 kubeadm.go:309] 
	I0318 05:07:29.744573   21713 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 05:07:29.744577   21713 kubeadm.go:309] 
	I0318 05:07:29.744603   21713 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 05:07:29.744642   21713 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 05:07:29.744685   21713 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 05:07:29.744688   21713 kubeadm.go:309] 
	I0318 05:07:29.744727   21713 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 05:07:29.744771   21713 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 05:07:29.744774   21713 kubeadm.go:309] 
	I0318 05:07:29.744818   21713 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token zzghot.6ejp1jln0cyhdi5r \
	I0318 05:07:29.744882   21713 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:2c4297b91ace817e0fb1c32526c2ad664eb333850689868816794ba1e9dfc07f \
	I0318 05:07:29.744892   21713 kubeadm.go:309] 	--control-plane 
	I0318 05:07:29.744896   21713 kubeadm.go:309] 
	I0318 05:07:29.744939   21713 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 05:07:29.744942   21713 kubeadm.go:309] 
	I0318 05:07:29.744988   21713 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token zzghot.6ejp1jln0cyhdi5r \
	I0318 05:07:29.745050   21713 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:2c4297b91ace817e0fb1c32526c2ad664eb333850689868816794ba1e9dfc07f 
	I0318 05:07:29.745200   21713 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 05:07:29.745252   21713 cni.go:84] Creating CNI manager for ""
	I0318 05:07:29.745260   21713 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:07:29.749249   21713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 05:07:29.756227   21713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 05:07:29.759604   21713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 05:07:29.764759   21713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 05:07:29.764793   21713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 05:07:29.764841   21713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-211000 minikube.k8s.io/updated_at=2024_03_18T05_07_29_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=2fcca2f5df154fc6e65b455801f87bc0777140de minikube.k8s.io/name=stopped-upgrade-211000 minikube.k8s.io/primary=true
	I0318 05:07:29.808156   21713 ops.go:34] apiserver oom_adj: -16
	I0318 05:07:29.813993   21713 kubeadm.go:1107] duration metric: took 49.230958ms to wait for elevateKubeSystemPrivileges
	W0318 05:07:29.814013   21713 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 05:07:29.814017   21713 kubeadm.go:393] duration metric: took 4m12.274897s to StartCluster
	I0318 05:07:29.814026   21713 settings.go:142] acquiring lock: {Name:mkc727ca725e75d24ce65050e373ec9e186fcd50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:07:29.814173   21713 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:07:29.814545   21713 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/kubeconfig: {Name:mke65151970e01af41afaa654a36ecdb221d1a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:07:29.814747   21713 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:07:29.819168   21713 out.go:177] * Verifying Kubernetes components...
	I0318 05:07:29.814898   21713 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 05:07:29.814864   21713 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 05:07:29.827119   21713 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-211000"
	I0318 05:07:29.827133   21713 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-211000"
	W0318 05:07:29.827136   21713 addons.go:243] addon storage-provisioner should already be in state true
	I0318 05:07:29.827159   21713 host.go:66] Checking if "stopped-upgrade-211000" exists ...
	I0318 05:07:29.827180   21713 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-211000"
	I0318 05:07:29.827192   21713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 05:07:29.827194   21713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-211000"
	I0318 05:07:29.827672   21713 retry.go:31] will retry after 1.430317702s: connect: dial unix /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/stopped-upgrade-211000/monitor: connect: connection refused
	I0318 05:07:29.833210   21713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 05:07:29.837214   21713 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 05:07:29.837221   21713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 05:07:29.837228   21713 sshutil.go:53] new ssh client: &{IP:localhost Port:54278 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	I0318 05:07:29.906401   21713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 05:07:29.911468   21713 api_server.go:52] waiting for apiserver process to appear ...
	I0318 05:07:29.911510   21713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 05:07:29.915458   21713 api_server.go:72] duration metric: took 100.704333ms to wait for apiserver process to appear ...
	I0318 05:07:29.915465   21713 api_server.go:88] waiting for apiserver healthz status ...
	I0318 05:07:29.915472   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:29.924168   21713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 05:07:31.261009   21713 kapi.go:59] client config for stopped-upgrade-211000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/stopped-upgrade-211000/client.key", CAFile:"/Users/jenkins/minikube-integration/18427-19517/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10656aa80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 05:07:31.261136   21713 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-211000"
	W0318 05:07:31.261142   21713 addons.go:243] addon default-storageclass should already be in state true
	I0318 05:07:31.261154   21713 host.go:66] Checking if "stopped-upgrade-211000" exists ...
	I0318 05:07:31.261867   21713 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 05:07:31.261873   21713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 05:07:31.261879   21713 sshutil.go:53] new ssh client: &{IP:localhost Port:54278 SSHKeyPath:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	I0318 05:07:31.299489   21713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 05:07:34.917400   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:34.917420   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:39.917450   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:39.917475   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:44.917604   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:44.917631   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:49.917847   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:49.917893   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:54.918235   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:54.918291   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:07:59.918886   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:07:59.918916   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0318 05:08:01.350791   21713 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0318 05:08:01.354672   21713 out.go:177] * Enabled addons: storage-provisioner
	I0318 05:08:01.362510   21713 addons.go:505] duration metric: took 31.548730459s for enable addons: enabled=[storage-provisioner]
	I0318 05:08:04.919864   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:04.919886   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:09.920816   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:09.920849   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:14.922085   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:14.922114   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:19.923657   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:19.923682   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:24.925629   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:24.925649   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:29.927625   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:29.927721   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:08:29.938276   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:08:29.938344   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:08:29.948503   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:08:29.948575   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:08:29.958717   21713 logs.go:276] 2 containers: [61e158044db5 56b5ade2e09c]
	I0318 05:08:29.958781   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:08:29.968426   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:08:29.968492   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:08:29.978685   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:08:29.978754   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:08:29.988589   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:08:29.988661   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:08:29.998984   21713 logs.go:276] 0 containers: []
	W0318 05:08:29.998995   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:08:29.999057   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:08:30.009936   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:08:30.009951   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:08:30.009956   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:08:30.021521   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:08:30.021532   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:08:30.026487   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:08:30.026494   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:08:30.040394   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:08:30.040404   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:08:30.054820   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:08:30.054830   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:08:30.066859   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:08:30.066869   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:08:30.088250   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:08:30.088261   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:08:30.112930   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:08:30.112937   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:08:30.124212   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:08:30.124224   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:08:30.159600   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:08:30.159692   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:08:30.161700   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:08:30.161705   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:08:30.197214   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:08:30.197225   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:08:30.210826   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:08:30.210836   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:08:30.226770   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:08:30.226780   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:08:30.238215   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:08:30.238225   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:08:30.238252   21713 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 05:08:30.238256   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:08:30.238261   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:08:30.238266   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:08:30.238269   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:08:40.242106   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:08:45.244304   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:08:45.244546   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:08:45.273700   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:08:45.273827   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:08:45.291182   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:08:45.291263   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:08:45.305127   21713 logs.go:276] 2 containers: [61e158044db5 56b5ade2e09c]
	I0318 05:08:45.305205   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:08:45.323236   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:08:45.323308   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:08:45.334382   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:08:45.334450   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:08:45.345033   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:08:45.345106   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:08:45.355596   21713 logs.go:276] 0 containers: []
	W0318 05:08:45.355616   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:08:45.355675   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:08:45.366029   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:08:45.366045   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:08:45.366051   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:08:45.390326   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:08:45.390335   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:08:45.401727   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:08:45.401738   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:08:45.437048   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:08:45.437142   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:08:45.439210   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:08:45.439214   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:08:45.454091   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:08:45.454104   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:08:45.466782   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:08:45.466793   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:08:45.482082   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:08:45.482093   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:08:45.498838   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:08:45.498847   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:08:45.510210   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:08:45.510221   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:08:45.514752   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:08:45.514759   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:08:45.551110   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:08:45.551121   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:08:45.565945   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:08:45.565956   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:08:45.579792   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:08:45.579803   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:08:45.595707   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:08:45.595717   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:08:45.595743   21713 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 05:08:45.595748   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:08:45.595753   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:08:45.595759   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:08:45.595762   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:08:55.599578   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:00.601782   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:00.601973   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:00.627328   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:09:00.627439   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:00.642545   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:09:00.642620   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:00.654662   21713 logs.go:276] 2 containers: [61e158044db5 56b5ade2e09c]
	I0318 05:09:00.654743   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:00.665349   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:09:00.665425   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:00.675253   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:09:00.675334   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:00.685887   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:09:00.685960   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:00.695958   21713 logs.go:276] 0 containers: []
	W0318 05:09:00.695970   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:00.696031   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:00.706689   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:09:00.706709   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:00.706715   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:00.711452   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:09:00.711462   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:09:00.722788   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:09:00.722797   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:09:00.734026   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:09:00.734037   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:09:00.745703   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:09:00.745715   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:09:00.762757   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:09:00.762766   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:09:00.774626   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:00.774638   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:00.799808   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:09:00.799820   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:00.811573   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:00.811583   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:09:00.847904   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:09:00.847999   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:09:00.850002   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:00.850007   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:00.885096   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:09:00.885107   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:09:00.899847   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:09:00.899858   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:09:00.913658   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:09:00.913668   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:09:00.927496   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:09:00.927509   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:09:00.927533   21713 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 05:09:00.927541   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:09:00.927545   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:09:00.927550   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:09:00.927553   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:09:10.931069   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:15.933250   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
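Note on the probe cadence above: each healthz attempt against https://10.0.2.15:8443/healthz is given roughly a 5-second client timeout (hence "context deadline exceeded"), and attempts are spaced about 10 seconds apart. A minimal shell sketch of the same probe loop, for reproducing it by hand (illustrative only; the endpoint and timings are read off the log, and TLS verification is skipped here):

  # Probe apiserver healthz: 5s per-attempt timeout, ~10s between attempts.
  URL="https://10.0.2.15:8443/healthz"
  until curl -ksf --max-time 5 "$URL" >/dev/null; do
    echo "healthz probe failed; retrying in 10s"
    sleep 10
  done
  echo "apiserver healthy"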
	I0318 05:09:15.933408   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:15.953516   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:09:15.953601   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:15.966065   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:09:15.966139   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:15.976983   21713 logs.go:276] 2 containers: [61e158044db5 56b5ade2e09c]
	I0318 05:09:15.977052   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:15.986945   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:09:15.987007   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:15.998041   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:09:15.998117   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:16.009670   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:09:16.009738   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:16.025192   21713 logs.go:276] 0 containers: []
	W0318 05:09:16.025208   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:16.025268   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:16.035536   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
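The container enumeration above repeats the same docker ps filter once per component. A compact equivalent of that loop (a sketch; the component list and the k8s_<name> naming convention are copied from the Run: lines above):

  # One docker ps query per control-plane component, as in the log.
  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
           kube-controller-manager kindnet storage-provisioner; do
    ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
    echo "${c}: ${ids:-none}"
  done

On this run the kindnet query is expected to come back empty, matching the repeated "No container was found matching \"kindnet\"" warning.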
	I0318 05:09:16.035552   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:16.035558   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:09:16.072240   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:09:16.072336   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
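The two "Found kubelet problem" entries come from scanning the 400-line tail of the kubelet journal for known error patterns. A rough approximation of that scan (a sketch, not minikube's actual matcher; the grep pattern is narrowed to the one RBAC failure this run keeps reporting):

  # Scan the kubelet journal tail for the recurring reflector RBAC failure.
  sudo journalctl -u kubelet -n 400 \
    | grep -E 'reflector\.go:[0-9]+\].*configmaps "kube-proxy" is forbidden'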
	I0318 05:09:16.074352   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:09:16.074358   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:09:16.089532   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:16.089542   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:16.114184   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:09:16.114193   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:09:16.128634   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:09:16.128649   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:09:16.140227   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:09:16.140238   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:09:16.157488   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:16.157498   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:16.161914   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:16.161920   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:16.198373   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:09:16.198384   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:09:16.215360   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:09:16.215369   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:09:16.229305   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:09:16.229321   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:09:16.240760   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:09:16.240772   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:09:16.252397   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:09:16.252409   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
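The "container status" one-liner above encodes a fallback: if crictl is missing, `which crictl || echo crictl` substitutes the bare name so the first command fails, and the trailing `|| sudo docker ps -a` takes over. A roughly equivalent unrolled form (same effect, under the assumption that a missing or failing crictl should fall through to Docker):

  # Prefer crictl; fall back to the Docker CLI if it is absent or fails.
  sudo crictl ps -a 2>/dev/null || sudo docker ps -a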
	I0318 05:09:16.264152   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:09:16.264167   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:09:16.264197   21713 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 05:09:16.264201   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:09:16.264205   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:09:16.264209   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:09:16.264212   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
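The only kubelet problem this run ever reports is the same RBAC denial: the node credential system:node:stopped-upgrade-211000 is refused a list of the kube-proxy ConfigMap because the node authorizer finds no relationship between that node and the object (no pod bound to the node referenced the ConfigMap at request time). Whether the denial persists can be checked directly; a hypothetical diagnostic, run with an admin kubeconfig against this cluster:

  # Ask the apiserver whether the node credential may list ConfigMaps.
  kubectl auth can-i list configmaps \
    --namespace=kube-system \
    --as=system:node:stopped-upgrade-211000 \
    --as-group=system:nodes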
	I0318 05:09:26.268064   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:31.270242   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:31.270346   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:31.283179   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:09:31.283252   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:31.294119   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:09:31.294182   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:31.305477   21713 logs.go:276] 2 containers: [61e158044db5 56b5ade2e09c]
	I0318 05:09:31.305545   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:31.315934   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:09:31.316008   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:31.326324   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:09:31.326391   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:31.336478   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:09:31.336542   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:31.346540   21713 logs.go:276] 0 containers: []
	W0318 05:09:31.346550   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:31.346601   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:31.357812   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:09:31.357829   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:09:31.357834   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:09:31.374614   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:09:31.374624   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:09:31.389412   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:09:31.389428   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:09:31.405927   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:09:31.405938   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:09:31.418162   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:09:31.418175   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:09:31.429814   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:09:31.429824   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:09:31.447785   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:31.447796   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:09:31.482257   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:09:31.482352   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:09:31.484441   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:31.484451   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:31.488229   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:31.488236   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:31.528283   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:09:31.528295   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:09:31.544170   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:09:31.544180   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:09:31.560845   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:31.560855   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:31.584276   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:09:31.584285   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:31.600505   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:09:31.600515   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:09:31.600542   21713 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 05:09:31.600547   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:09:31.600550   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:09:31.600555   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:09:31.600558   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:09:41.603094   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:09:46.605220   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:09:46.605386   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:09:46.632813   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:09:46.632907   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:09:46.649666   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:09:46.649751   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:09:46.663435   21713 logs.go:276] 4 containers: [b50d97f3a440 7c8628f6727d 61e158044db5 56b5ade2e09c]
	I0318 05:09:46.663511   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:09:46.678122   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:09:46.678196   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:09:46.688532   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:09:46.688607   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:09:46.700795   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:09:46.700865   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:09:46.712050   21713 logs.go:276] 0 containers: []
	W0318 05:09:46.712062   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:09:46.712123   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:09:46.722977   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:09:46.722996   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:09:46.723001   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:09:46.759148   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:09:46.759244   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:09:46.761258   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:09:46.761262   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:09:46.765402   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:09:46.765411   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:09:46.783025   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:09:46.783037   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:09:46.807098   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:09:46.807106   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:09:46.818629   21713 logs.go:123] Gathering logs for coredns [7c8628f6727d] ...
	I0318 05:09:46.818642   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8628f6727d"
	I0318 05:09:46.830166   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:09:46.830176   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:09:46.864236   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:09:46.864246   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:09:46.878273   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:09:46.878284   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:09:46.890115   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:09:46.890126   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:09:46.901919   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:09:46.901931   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:09:46.913974   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:09:46.913986   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:09:46.929091   21713 logs.go:123] Gathering logs for coredns [b50d97f3a440] ...
	I0318 05:09:46.929102   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b50d97f3a440"
	I0318 05:09:46.940445   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:09:46.940456   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:09:46.958782   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:09:46.958793   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:09:46.970343   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:09:46.970354   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:09:46.970380   21713 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 05:09:46.970386   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:09:46.970391   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:09:46.970394   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:09:46.970398   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:09:56.974194   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:01.976265   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:01.976346   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:01.986966   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:10:01.987042   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:01.998650   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:10:01.998717   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:02.009081   21713 logs.go:276] 4 containers: [b50d97f3a440 7c8628f6727d 61e158044db5 56b5ade2e09c]
	I0318 05:10:02.009157   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:02.020418   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:10:02.020490   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:02.030533   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:10:02.030622   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:02.041313   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:10:02.041374   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:02.051642   21713 logs.go:276] 0 containers: []
	W0318 05:10:02.051653   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:02.051709   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:02.062387   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:10:02.062408   21713 logs.go:123] Gathering logs for coredns [7c8628f6727d] ...
	I0318 05:10:02.062414   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8628f6727d"
	I0318 05:10:02.074113   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:02.074124   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:10:02.110503   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:10:02.110597   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:10:02.112711   21713 logs.go:123] Gathering logs for coredns [b50d97f3a440] ...
	I0318 05:10:02.112717   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b50d97f3a440"
	I0318 05:10:02.124098   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:10:02.124109   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:10:02.135448   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:10:02.135457   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:10:02.149537   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:10:02.149547   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:10:02.161938   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:10:02.161949   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:10:02.176381   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:10:02.176391   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:10:02.187939   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:02.187948   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:02.192316   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:02.192322   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:02.228899   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:10:02.228909   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:10:02.246662   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:02.246672   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:02.271529   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:10:02.271537   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:02.284598   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:10:02.284612   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:10:02.299368   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:10:02.299379   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:10:02.314977   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:10:02.314989   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:10:02.315018   21713 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 05:10:02.315022   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:10:02.315026   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:10:02.315030   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:10:02.315033   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:10:12.317924   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:17.320113   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:17.320355   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:17.347657   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:10:17.347775   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:17.366444   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:10:17.366530   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:17.380247   21713 logs.go:276] 4 containers: [b50d97f3a440 7c8628f6727d 61e158044db5 56b5ade2e09c]
	I0318 05:10:17.380325   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:17.391739   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:10:17.391810   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:17.406104   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:10:17.406177   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:17.418306   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:10:17.418374   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:17.428486   21713 logs.go:276] 0 containers: []
	W0318 05:10:17.428498   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:17.428561   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:17.438549   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:10:17.438567   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:10:17.438574   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:10:17.469575   21713 logs.go:123] Gathering logs for coredns [7c8628f6727d] ...
	I0318 05:10:17.469588   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8628f6727d"
	I0318 05:10:17.491006   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:17.491018   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:17.495381   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:10:17.495391   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:10:17.509799   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:10:17.509810   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:10:17.521593   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:10:17.521602   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:10:17.533071   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:10:17.533081   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:10:17.555976   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:17.555988   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:17.581165   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:17.581175   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:17.618076   21713 logs.go:123] Gathering logs for coredns [b50d97f3a440] ...
	I0318 05:10:17.618086   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b50d97f3a440"
	I0318 05:10:17.632032   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:10:17.632043   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:10:17.646392   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:10:17.646401   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:10:17.657949   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:10:17.657960   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:17.670100   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:17.670111   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:10:17.704733   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:10:17.704827   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:10:17.706937   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:10:17.706942   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:10:17.718497   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:10:17.718507   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:10:17.718537   21713 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 05:10:17.718543   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:10:17.718553   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:10:17.718559   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:10:17.718564   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:10:27.722385   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:32.723225   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:32.723351   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:32.737560   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:10:32.737629   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:32.749058   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:10:32.749131   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:32.759500   21713 logs.go:276] 4 containers: [b50d97f3a440 7c8628f6727d 61e158044db5 56b5ade2e09c]
	I0318 05:10:32.759576   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:32.771997   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:10:32.772075   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:32.782719   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:10:32.782786   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:32.792795   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:10:32.792883   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:32.803592   21713 logs.go:276] 0 containers: []
	W0318 05:10:32.803604   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:32.803662   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:32.818494   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:10:32.818511   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:10:32.818516   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:10:32.830662   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:32.830673   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:32.834960   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:10:32.834968   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:10:32.850057   21713 logs.go:123] Gathering logs for coredns [b50d97f3a440] ...
	I0318 05:10:32.850081   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b50d97f3a440"
	I0318 05:10:32.861757   21713 logs.go:123] Gathering logs for coredns [7c8628f6727d] ...
	I0318 05:10:32.861767   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8628f6727d"
	I0318 05:10:32.873525   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:32.873534   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:32.897388   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:10:32.897395   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:32.908866   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:32.908877   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:10:32.945259   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:10:32.945353   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:10:32.947355   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:10:32.947359   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:10:32.958578   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:10:32.958589   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:10:32.973280   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:10:32.973292   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:10:32.990586   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:32.990597   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:33.030224   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:10:33.030234   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:10:33.042820   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:10:33.042832   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:10:33.064034   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:10:33.064045   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:10:33.078493   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:10:33.078503   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:10:33.078532   21713 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 05:10:33.078536   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:10:33.078539   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:10:33.078543   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:10:33.078547   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:10:43.082363   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:10:48.084434   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:10:48.084638   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:10:48.102594   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:10:48.102678   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:10:48.118812   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:10:48.118888   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:10:48.130036   21713 logs.go:276] 4 containers: [b50d97f3a440 7c8628f6727d 61e158044db5 56b5ade2e09c]
	I0318 05:10:48.130113   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:10:48.140750   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:10:48.140821   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:10:48.155611   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:10:48.155676   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:10:48.166329   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:10:48.166398   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:10:48.176047   21713 logs.go:276] 0 containers: []
	W0318 05:10:48.176060   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:10:48.176120   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:10:48.186490   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:10:48.186509   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:10:48.186515   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:10:48.190799   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:10:48.190808   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:10:48.217430   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:10:48.217440   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:10:48.229494   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:10:48.229503   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:10:48.253666   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:10:48.253674   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:10:48.288513   21713 logs.go:123] Gathering logs for coredns [b50d97f3a440] ...
	I0318 05:10:48.288523   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b50d97f3a440"
	I0318 05:10:48.300669   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:10:48.300679   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:10:48.312390   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:10:48.312400   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:10:48.330724   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:10:48.330735   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:10:48.367266   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:10:48.367364   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:10:48.369507   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:10:48.369512   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:10:48.383667   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:10:48.383677   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:10:48.397640   21713 logs.go:123] Gathering logs for coredns [7c8628f6727d] ...
	I0318 05:10:48.397651   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8628f6727d"
	I0318 05:10:48.409484   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:10:48.409500   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:10:48.421101   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:10:48.421111   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:10:48.432746   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:10:48.432759   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:10:48.444460   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:10:48.444471   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:10:48.444501   21713 out.go:239] X Problems detected in kubelet:
	W0318 05:10:48.444506   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:10:48.444509   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:10:48.444513   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:10:48.444517   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:10:58.446255   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:11:03.446088   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:11:03.446324   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:11:03.475812   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:11:03.475934   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:11:03.497296   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:11:03.497377   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:11:03.512265   21713 logs.go:276] 4 containers: [b50d97f3a440 7c8628f6727d 61e158044db5 56b5ade2e09c]
	I0318 05:11:03.512346   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:11:03.523662   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:11:03.523729   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:11:03.533910   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:11:03.533972   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:11:03.544890   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:11:03.544955   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:11:03.554721   21713 logs.go:276] 0 containers: []
	W0318 05:11:03.554732   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:11:03.554782   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:11:03.565431   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:11:03.565450   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:11:03.565456   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:11:03.600602   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:11:03.600697   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:11:03.602761   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:11:03.602765   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:11:03.623312   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:11:03.623324   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:11:03.637321   21713 logs.go:123] Gathering logs for coredns [b50d97f3a440] ...
	I0318 05:11:03.637332   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b50d97f3a440"
	I0318 05:11:03.648917   21713 logs.go:123] Gathering logs for coredns [7c8628f6727d] ...
	I0318 05:11:03.648928   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8628f6727d"
	I0318 05:11:03.660894   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:11:03.660905   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:11:03.672548   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:11:03.672559   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:11:03.713560   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:11:03.713572   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:11:03.732179   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:11:03.732190   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:11:03.755640   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:11:03.755653   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:11:03.780951   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:11:03.780966   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:11:03.785849   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:11:03.785858   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:11:03.798046   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:11:03.798058   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:11:03.810785   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:11:03.810796   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:11:03.822923   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:11:03.822935   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:11:03.834606   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:11:03.834616   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:11:03.834641   21713 out.go:239] X Problems detected in kubelet:
	W0318 05:11:03.834646   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:11:03.834650   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:11:03.834655   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:11:03.834658   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:11:13.837379   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:11:18.837112   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:11:18.837261   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 05:11:18.848763   21713 logs.go:276] 1 containers: [6715e07ea390]
	I0318 05:11:18.848841   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 05:11:18.874237   21713 logs.go:276] 1 containers: [804ac8b3253c]
	I0318 05:11:18.874312   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 05:11:18.898056   21713 logs.go:276] 4 containers: [b50d97f3a440 7c8628f6727d 61e158044db5 56b5ade2e09c]
	I0318 05:11:18.898134   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 05:11:18.911464   21713 logs.go:276] 1 containers: [87fd2f1c3051]
	I0318 05:11:18.911544   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 05:11:18.922091   21713 logs.go:276] 1 containers: [d6808482ca1a]
	I0318 05:11:18.922165   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 05:11:18.932673   21713 logs.go:276] 1 containers: [9c8ffc5e895a]
	I0318 05:11:18.932744   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 05:11:18.942674   21713 logs.go:276] 0 containers: []
	W0318 05:11:18.942687   21713 logs.go:278] No container was found matching "kindnet"
	I0318 05:11:18.942741   21713 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 05:11:18.952814   21713 logs.go:276] 1 containers: [ddf8605ef0f7]
	I0318 05:11:18.952829   21713 logs.go:123] Gathering logs for coredns [56b5ade2e09c] ...
	I0318 05:11:18.952834   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56b5ade2e09c"
	I0318 05:11:18.964529   21713 logs.go:123] Gathering logs for kube-scheduler [87fd2f1c3051] ...
	I0318 05:11:18.964540   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fd2f1c3051"
	I0318 05:11:18.979057   21713 logs.go:123] Gathering logs for kube-proxy [d6808482ca1a] ...
	I0318 05:11:18.979074   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6808482ca1a"
	I0318 05:11:18.990996   21713 logs.go:123] Gathering logs for describe nodes ...
	I0318 05:11:18.991007   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 05:11:19.025164   21713 logs.go:123] Gathering logs for etcd [804ac8b3253c] ...
	I0318 05:11:19.025175   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 804ac8b3253c"
	I0318 05:11:19.039713   21713 logs.go:123] Gathering logs for coredns [b50d97f3a440] ...
	I0318 05:11:19.039724   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b50d97f3a440"
	I0318 05:11:19.051703   21713 logs.go:123] Gathering logs for coredns [7c8628f6727d] ...
	I0318 05:11:19.051714   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8628f6727d"
	I0318 05:11:19.063337   21713 logs.go:123] Gathering logs for coredns [61e158044db5] ...
	I0318 05:11:19.063349   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61e158044db5"
	I0318 05:11:19.075100   21713 logs.go:123] Gathering logs for kube-apiserver [6715e07ea390] ...
	I0318 05:11:19.075111   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6715e07ea390"
	I0318 05:11:19.089499   21713 logs.go:123] Gathering logs for Docker ...
	I0318 05:11:19.089510   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 05:11:19.114183   21713 logs.go:123] Gathering logs for kubelet ...
	I0318 05:11:19.114191   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 05:11:19.150051   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:11:19.150143   21713 logs.go:138] Found kubelet problem: Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:11:19.152194   21713 logs.go:123] Gathering logs for kube-controller-manager [9c8ffc5e895a] ...
	I0318 05:11:19.152199   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8ffc5e895a"
	I0318 05:11:19.169613   21713 logs.go:123] Gathering logs for storage-provisioner [ddf8605ef0f7] ...
	I0318 05:11:19.169624   21713 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf8605ef0f7"
	I0318 05:11:19.181159   21713 logs.go:123] Gathering logs for container status ...
	I0318 05:11:19.181173   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 05:11:19.193367   21713 logs.go:123] Gathering logs for dmesg ...
	I0318 05:11:19.193377   21713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 05:11:19.197905   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:11:19.197916   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 05:11:19.197938   21713 out.go:239] X Problems detected in kubelet:
	W0318 05:11:19.197943   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: W0318 12:07:42.233012   10407 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0318 05:11:19.197951   21713 out.go:239]   Mar 18 12:07:42 stopped-upgrade-211000 kubelet[10407]: E0318 12:07:42.233046   10407 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0318 05:11:19.197955   21713 out.go:304] Setting ErrFile to fd 2...
	I0318 05:11:19.197957   21713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:11:29.201353   21713 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 05:11:34.203395   21713 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 05:11:34.207695   21713 out.go:177] 
	W0318 05:11:34.211772   21713 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0318 05:11:34.211779   21713 out.go:239] * 
	W0318 05:11:34.212285   21713 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:11:34.222605   21713 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-211000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (619.11s)
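The six-minute loop above has a single symptom: every probe of https://10.0.2.15:8443/healthz times out, so minikube re-runs its log-gathering pass (docker ps -a --filter=name=k8s_<component> to find each container, then docker logs --tail 400 on it) until the GUEST_START deadline expires. The only concrete problems it surfaces are the two kubelet RBAC errors. A minimal sketch for re-checking both findings by hand, assuming shell access to the guest and the kubectl/kubeconfig paths shown in the log:

    # Probe the same endpoint minikube polls; a healthy apiserver answers "ok".
    curl -k --max-time 5 https://10.0.2.15:8443/healthz

    # Re-ask the question behind the kubelet warnings; "no" here matches the
    # 'configmaps "kube-proxy" is forbidden' errors above.
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      auth can-i list configmaps -n kube-system \
      --as=system:node:stopped-upgrade-211000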

TestPause/serial/Start (10.00s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-666000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-666000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.946725208s)

-- stdout --
	* [pause-666000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-666000" primary control-plane node in "pause-666000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-666000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-666000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-666000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-666000 -n pause-666000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-666000 -n pause-666000: exit status 7 (52.241125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-666000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.00s)
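This failure, and every qemu2 start for the remainder of the report, dies at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the VM never gets its network device and minikube gives up after one delete-and-retry. A triage sketch for the CI host, assuming the Homebrew-managed socket_vmnet setup that the minikube qemu2 driver documentation describes:

    # Is the daemon's socket present, and is the process alive?
    ls -l /var/run/socket_vmnet
    ps aux | grep '[s]ocket_vmnet'

    # If not, restart the service; minikube's docs run brew services under
    # sudo so the daemon can use the vmnet framework.
    sudo brew services restart socket_vmnet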

TestNoKubernetes/serial/StartWithK8s (9.96s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-277000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-277000 --driver=qemu2 : exit status 80 (9.895401292s)

-- stdout --
	* [NoKubernetes-277000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-277000" primary control-plane node in "NoKubernetes-277000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-277000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-277000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-277000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-277000 -n NoKubernetes-277000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-277000 -n NoKubernetes-277000: exit status 7 (68.17325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-277000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.96s)
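Note the post-mortem convention used throughout these failures: minikube status encodes component health as bit flags in its exit code (1 = host not OK, 2 = cluster not OK, 4 = Kubernetes not OK, per minikube status --help), so exit status 7 simply confirms everything is down, which is why the helper marks it "may be ok" and skips log retrieval. To see it directly:

    # Prints "Stopped" and exits 7 (1+2+4: host, cluster, Kubernetes all down).
    out/minikube-darwin-arm64 status --format='{{.Host}}' -p NoKubernetes-277000
    echo $?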

TestNoKubernetes/serial/StartWithStopK8s (7.47s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-277000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-277000 --no-kubernetes --driver=qemu2 : exit status 80 (7.432225375s)

-- stdout --
	* [NoKubernetes-277000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-277000
	* Restarting existing qemu2 VM for "NoKubernetes-277000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-277000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-277000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-277000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-277000 -n NoKubernetes-277000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-277000 -n NoKubernetes-277000: exit status 7 (33.564375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-277000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (7.47s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.09s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18427
- KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3520731013/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.09s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.55s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18427
- KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3875821177/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.55s)
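Both hyperkit skip-upgrade tests fail for the same structural reason rather than a regression: hyperkit is an Intel-only hypervisor, and this worker is Apple Silicon, so minikube rejects the driver (DRV_UNSUPPORTED_OS) before any upgrade logic runs. A one-line sanity check on the host:

    # "arm64" means the hyperkit driver can never be exercised on this worker,
    # so these two tests will fail here regardless of the code under test.
    uname -m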

TestNoKubernetes/serial/Start (5.97s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-277000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-277000 --no-kubernetes --driver=qemu2 : exit status 80 (5.89698025s)

-- stdout --
	* [NoKubernetes-277000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-277000
	* Restarting existing qemu2 VM for "NoKubernetes-277000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-277000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-277000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-277000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-277000 -n NoKubernetes-277000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-277000 -n NoKubernetes-277000: exit status 7 (74.023791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-277000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.97s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-277000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-277000 --driver=qemu2 : exit status 80 (7.228421625s)

                                                
                                                
-- stdout --
	* [NoKubernetes-277000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-277000
	* Restarting existing qemu2 VM for "NoKubernetes-277000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-277000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-277000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-277000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-277000 -n NoKubernetes-277000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-277000 -n NoKubernetes-277000: exit status 7 (54.377541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-277000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (7.28s)
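The auto-970000 log below is the most verbose start in this report and walks the full qemu2 create path: libmachine converts the raw boot2docker image to qcow2, grows it to the requested disk size, then launches qemu-system-aarch64 through socket_vmnet_client, which is exactly where the connection-refused error lands. The two disk-preparation steps in isolation (hypothetical file names, mirroring the qemu-img calls in the log below):

    # Build the qcow2 disk from the raw image, then grow it by 20000 MB,
    # matching the "Creating 20000 MB hard disk image" step in the log.
    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    qemu-img resize disk.qcow2 +20000M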

TestNetworkPlugins/group/auto/Start (9.91s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-970000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-970000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.904370917s)

-- stdout --
	* [auto-970000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-970000" primary control-plane node in "auto-970000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-970000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:13:17.648471   22173 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:13:17.648625   22173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:13:17.648629   22173 out.go:304] Setting ErrFile to fd 2...
	I0318 05:13:17.648631   22173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:13:17.648763   22173 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:13:17.649884   22173 out.go:298] Setting JSON to false
	I0318 05:13:17.665859   22173 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11570,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:13:17.665934   22173 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:13:17.672987   22173 out.go:177] * [auto-970000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:13:17.680946   22173 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:13:17.680995   22173 notify.go:220] Checking for updates...
	I0318 05:13:17.683954   22173 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:13:17.686938   22173 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:13:17.689931   22173 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:13:17.692913   22173 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:13:17.695945   22173 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:13:17.699390   22173 config.go:182] Loaded profile config "cert-expiration-110000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:13:17.699458   22173 config.go:182] Loaded profile config "multinode-730000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:13:17.699501   22173 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:13:17.703878   22173 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 05:13:17.710961   22173 start.go:297] selected driver: qemu2
	I0318 05:13:17.710968   22173 start.go:901] validating driver "qemu2" against <nil>
	I0318 05:13:17.710974   22173 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:13:17.713242   22173 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 05:13:17.716840   22173 out.go:177] * Automatically selected the socket_vmnet network
	I0318 05:13:17.719991   22173 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 05:13:17.720040   22173 cni.go:84] Creating CNI manager for ""
	I0318 05:13:17.720048   22173 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:13:17.720052   22173 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 05:13:17.720085   22173 start.go:340] cluster config:
	{Name:auto-970000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:13:17.724525   22173 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:13:17.729927   22173 out.go:177] * Starting "auto-970000" primary control-plane node in "auto-970000" cluster
	I0318 05:13:17.733927   22173 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 05:13:17.733942   22173 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 05:13:17.733958   22173 cache.go:56] Caching tarball of preloaded images
	I0318 05:13:17.734022   22173 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:13:17.734043   22173 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 05:13:17.734103   22173 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/auto-970000/config.json ...
	I0318 05:13:17.734115   22173 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/auto-970000/config.json: {Name:mk361a72c3b10716bd69beed0271f3613e947e73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:13:17.734351   22173 start.go:360] acquireMachinesLock for auto-970000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:13:17.734384   22173 start.go:364] duration metric: took 26.666µs to acquireMachinesLock for "auto-970000"
	I0318 05:13:17.734396   22173 start.go:93] Provisioning new machine with config: &{Name:auto-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:13:17.734425   22173 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:13:17.742909   22173 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 05:13:17.760777   22173 start.go:159] libmachine.API.Create for "auto-970000" (driver="qemu2")
	I0318 05:13:17.760803   22173 client.go:168] LocalClient.Create starting
	I0318 05:13:17.760860   22173 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:13:17.760888   22173 main.go:141] libmachine: Decoding PEM data...
	I0318 05:13:17.760897   22173 main.go:141] libmachine: Parsing certificate...
	I0318 05:13:17.760942   22173 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:13:17.760964   22173 main.go:141] libmachine: Decoding PEM data...
	I0318 05:13:17.760970   22173 main.go:141] libmachine: Parsing certificate...
	I0318 05:13:17.761331   22173 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:13:17.901645   22173 main.go:141] libmachine: Creating SSH key...
	I0318 05:13:18.038960   22173 main.go:141] libmachine: Creating Disk image...
	I0318 05:13:18.038968   22173 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:13:18.039161   22173 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/auto-970000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/auto-970000/disk.qcow2
	I0318 05:13:18.051457   22173 main.go:141] libmachine: STDOUT: 
	I0318 05:13:18.051483   22173 main.go:141] libmachine: STDERR: 
	I0318 05:13:18.051546   22173 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/auto-970000/disk.qcow2 +20000M
	I0318 05:13:18.062122   22173 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:13:18.062145   22173 main.go:141] libmachine: STDERR: 
	I0318 05:13:18.062168   22173 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/auto-970000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/auto-970000/disk.qcow2
	I0318 05:13:18.062175   22173 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:13:18.062209   22173 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/auto-970000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/auto-970000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/auto-970000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:6b:eb:49:8f:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/auto-970000/disk.qcow2
	I0318 05:13:18.064010   22173 main.go:141] libmachine: STDOUT: 
	I0318 05:13:18.064032   22173 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:13:18.064053   22173 client.go:171] duration metric: took 303.255959ms to LocalClient.Create
	I0318 05:13:20.064530   22173 start.go:128] duration metric: took 2.330159083s to createHost
	I0318 05:13:20.064618   22173 start.go:83] releasing machines lock for "auto-970000", held for 2.330301667s
	W0318 05:13:20.064683   22173 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:13:20.075266   22173 out.go:177] * Deleting "auto-970000" in qemu2 ...
	W0318 05:13:20.105252   22173 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:13:20.105285   22173 start.go:728] Will try again in 5 seconds ...
	I0318 05:13:25.107280   22173 start.go:360] acquireMachinesLock for auto-970000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:13:25.107652   22173 start.go:364] duration metric: took 309.5µs to acquireMachinesLock for "auto-970000"
	I0318 05:13:25.107758   22173 start.go:93] Provisioning new machine with config: &{Name:auto-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:13:25.108048   22173 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:13:25.116718   22173 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 05:13:25.166048   22173 start.go:159] libmachine.API.Create for "auto-970000" (driver="qemu2")
	I0318 05:13:25.166098   22173 client.go:168] LocalClient.Create starting
	I0318 05:13:25.166244   22173 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:13:25.166306   22173 main.go:141] libmachine: Decoding PEM data...
	I0318 05:13:25.166324   22173 main.go:141] libmachine: Parsing certificate...
	I0318 05:13:25.166392   22173 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:13:25.166435   22173 main.go:141] libmachine: Decoding PEM data...
	I0318 05:13:25.166447   22173 main.go:141] libmachine: Parsing certificate...
	I0318 05:13:25.166953   22173 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:13:25.317735   22173 main.go:141] libmachine: Creating SSH key...
	I0318 05:13:25.447349   22173 main.go:141] libmachine: Creating Disk image...
	I0318 05:13:25.447359   22173 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:13:25.447530   22173 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/auto-970000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/auto-970000/disk.qcow2
	I0318 05:13:25.460221   22173 main.go:141] libmachine: STDOUT: 
	I0318 05:13:25.460242   22173 main.go:141] libmachine: STDERR: 
	I0318 05:13:25.460306   22173 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/auto-970000/disk.qcow2 +20000M
	I0318 05:13:25.471240   22173 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:13:25.471273   22173 main.go:141] libmachine: STDERR: 
	I0318 05:13:25.471289   22173 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/auto-970000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/auto-970000/disk.qcow2
	I0318 05:13:25.471296   22173 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:13:25.471333   22173 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/auto-970000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/auto-970000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/auto-970000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:21:e1:92:4b:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/auto-970000/disk.qcow2
	I0318 05:13:25.473183   22173 main.go:141] libmachine: STDOUT: 
	I0318 05:13:25.473199   22173 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:13:25.473211   22173 client.go:171] duration metric: took 307.118041ms to LocalClient.Create
	I0318 05:13:27.475416   22173 start.go:128] duration metric: took 2.367393625s to createHost
	I0318 05:13:27.475490   22173 start.go:83] releasing machines lock for "auto-970000", held for 2.36788275s
	W0318 05:13:27.475937   22173 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-970000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-970000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:13:27.486517   22173 out.go:177] 
	W0318 05:13:27.494740   22173 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:13:27.494765   22173 out.go:239] * 
	* 
	W0318 05:13:27.497178   22173 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:13:27.506547   22173 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.91s)
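Every start in this group dies at the same point: the qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the daemon behind /var/run/socket_vmnet, so the VM never boots and minikube exits with GUEST_PROVISION. A quick way to confirm the daemon is down before rerunning the suite is to dial the unix socket directly. The following is an illustrative Go sketch, not part of net_test.go; the socket path is taken from the SocketVMnetPath field in the config dump above.

// probe_socket_vmnet.go - minimal reachability check for the socket_vmnet
// daemon. A "connection refused" here reproduces the STDERR line in the
// logs above: nothing is listening on the socket.
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// Path comes from SocketVMnetPath in the machine config above.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening")
}

If the probe fails, restarting the socket_vmnet daemon (or its launchd service) on the CI host should clear this whole family of failures; note that the socket may be root-owned, so the probe itself might need elevated privileges.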

TestNetworkPlugins/group/kindnet/Start (9.82s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-970000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-970000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.813566333s)

-- stdout --
	* [kindnet-970000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-970000" primary control-plane node in "kindnet-970000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-970000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:13:29.858871   22283 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:13:29.858991   22283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:13:29.858994   22283 out.go:304] Setting ErrFile to fd 2...
	I0318 05:13:29.858997   22283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:13:29.859128   22283 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:13:29.860169   22283 out.go:298] Setting JSON to false
	I0318 05:13:29.876237   22283 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11582,"bootTime":1710752427,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:13:29.876296   22283 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:13:29.882751   22283 out.go:177] * [kindnet-970000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:13:29.891746   22283 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:13:29.895841   22283 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:13:29.891786   22283 notify.go:220] Checking for updates...
	I0318 05:13:29.902717   22283 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:13:29.905744   22283 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:13:29.908749   22283 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:13:29.911739   22283 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:13:29.915101   22283 config.go:182] Loaded profile config "cert-expiration-110000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:13:29.915178   22283 config.go:182] Loaded profile config "multinode-730000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:13:29.915230   22283 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:13:29.919754   22283 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 05:13:29.926729   22283 start.go:297] selected driver: qemu2
	I0318 05:13:29.926736   22283 start.go:901] validating driver "qemu2" against <nil>
	I0318 05:13:29.926743   22283 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:13:29.929031   22283 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 05:13:29.933763   22283 out.go:177] * Automatically selected the socket_vmnet network
	I0318 05:13:29.936822   22283 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 05:13:29.936862   22283 cni.go:84] Creating CNI manager for "kindnet"
	I0318 05:13:29.936867   22283 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 05:13:29.936904   22283 start.go:340] cluster config:
	{Name:kindnet-970000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:13:29.941602   22283 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:13:29.948716   22283 out.go:177] * Starting "kindnet-970000" primary control-plane node in "kindnet-970000" cluster
	I0318 05:13:29.952724   22283 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 05:13:29.952738   22283 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 05:13:29.952744   22283 cache.go:56] Caching tarball of preloaded images
	I0318 05:13:29.952795   22283 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:13:29.952801   22283 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 05:13:29.952857   22283 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/kindnet-970000/config.json ...
	I0318 05:13:29.952868   22283 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/kindnet-970000/config.json: {Name:mkeff81dc873f92d3b2fb8569be9caf26892eaa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:13:29.953083   22283 start.go:360] acquireMachinesLock for kindnet-970000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:13:29.953116   22283 start.go:364] duration metric: took 26.709µs to acquireMachinesLock for "kindnet-970000"
	I0318 05:13:29.953129   22283 start.go:93] Provisioning new machine with config: &{Name:kindnet-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:13:29.953160   22283 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:13:29.961754   22283 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 05:13:29.979329   22283 start.go:159] libmachine.API.Create for "kindnet-970000" (driver="qemu2")
	I0318 05:13:29.979363   22283 client.go:168] LocalClient.Create starting
	I0318 05:13:29.979425   22283 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:13:29.979453   22283 main.go:141] libmachine: Decoding PEM data...
	I0318 05:13:29.979463   22283 main.go:141] libmachine: Parsing certificate...
	I0318 05:13:29.979514   22283 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:13:29.979536   22283 main.go:141] libmachine: Decoding PEM data...
	I0318 05:13:29.979544   22283 main.go:141] libmachine: Parsing certificate...
	I0318 05:13:29.979904   22283 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:13:30.119754   22283 main.go:141] libmachine: Creating SSH key...
	I0318 05:13:30.204162   22283 main.go:141] libmachine: Creating Disk image...
	I0318 05:13:30.204167   22283 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:13:30.204342   22283 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kindnet-970000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kindnet-970000/disk.qcow2
	I0318 05:13:30.217077   22283 main.go:141] libmachine: STDOUT: 
	I0318 05:13:30.217097   22283 main.go:141] libmachine: STDERR: 
	I0318 05:13:30.217159   22283 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kindnet-970000/disk.qcow2 +20000M
	I0318 05:13:30.228076   22283 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:13:30.228094   22283 main.go:141] libmachine: STDERR: 
	I0318 05:13:30.228113   22283 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kindnet-970000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kindnet-970000/disk.qcow2
	I0318 05:13:30.228119   22283 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:13:30.228150   22283 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kindnet-970000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kindnet-970000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kindnet-970000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:86:0e:d3:1b:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kindnet-970000/disk.qcow2
	I0318 05:13:30.229853   22283 main.go:141] libmachine: STDOUT: 
	I0318 05:13:30.229869   22283 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:13:30.229886   22283 client.go:171] duration metric: took 250.525042ms to LocalClient.Create
	I0318 05:13:32.230449   22283 start.go:128] duration metric: took 2.277336083s to createHost
	I0318 05:13:32.230525   22283 start.go:83] releasing machines lock for "kindnet-970000", held for 2.277476459s
	W0318 05:13:32.230631   22283 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:13:32.236758   22283 out.go:177] * Deleting "kindnet-970000" in qemu2 ...
	W0318 05:13:32.268895   22283 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:13:32.268935   22283 start.go:728] Will try again in 5 seconds ...
	I0318 05:13:37.270966   22283 start.go:360] acquireMachinesLock for kindnet-970000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:13:37.271402   22283 start.go:364] duration metric: took 349.083µs to acquireMachinesLock for "kindnet-970000"
	I0318 05:13:37.271531   22283 start.go:93] Provisioning new machine with config: &{Name:kindnet-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:13:37.271750   22283 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:13:37.289522   22283 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 05:13:37.338777   22283 start.go:159] libmachine.API.Create for "kindnet-970000" (driver="qemu2")
	I0318 05:13:37.338829   22283 client.go:168] LocalClient.Create starting
	I0318 05:13:37.338925   22283 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:13:37.338985   22283 main.go:141] libmachine: Decoding PEM data...
	I0318 05:13:37.339002   22283 main.go:141] libmachine: Parsing certificate...
	I0318 05:13:37.339063   22283 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:13:37.339103   22283 main.go:141] libmachine: Decoding PEM data...
	I0318 05:13:37.339113   22283 main.go:141] libmachine: Parsing certificate...
	I0318 05:13:37.339629   22283 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:13:37.491847   22283 main.go:141] libmachine: Creating SSH key...
	I0318 05:13:37.570820   22283 main.go:141] libmachine: Creating Disk image...
	I0318 05:13:37.570825   22283 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:13:37.571017   22283 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kindnet-970000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kindnet-970000/disk.qcow2
	I0318 05:13:37.583199   22283 main.go:141] libmachine: STDOUT: 
	I0318 05:13:37.583221   22283 main.go:141] libmachine: STDERR: 
	I0318 05:13:37.583285   22283 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kindnet-970000/disk.qcow2 +20000M
	I0318 05:13:37.593842   22283 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:13:37.593863   22283 main.go:141] libmachine: STDERR: 
	I0318 05:13:37.593874   22283 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kindnet-970000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kindnet-970000/disk.qcow2
	I0318 05:13:37.593879   22283 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:13:37.593917   22283 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kindnet-970000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kindnet-970000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kindnet-970000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:3e:de:7f:fd:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kindnet-970000/disk.qcow2
	I0318 05:13:37.595651   22283 main.go:141] libmachine: STDOUT: 
	I0318 05:13:37.595669   22283 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:13:37.595682   22283 client.go:171] duration metric: took 256.857541ms to LocalClient.Create
	I0318 05:13:39.597787   22283 start.go:128] duration metric: took 2.326085708s to createHost
	I0318 05:13:39.597847   22283 start.go:83] releasing machines lock for "kindnet-970000", held for 2.326487459s
	W0318 05:13:39.598206   22283 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-970000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-970000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:13:39.607725   22283 out.go:177] 
	W0318 05:13:39.613874   22283 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:13:39.613904   22283 out.go:239] * 
	* 
	W0318 05:13:39.617091   22283 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:13:39.626786   22283 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.82s)
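kindnet fails identically to auto even though it requests a different CNI: the error fires while creating the VM, before any CNI code runs, which pins the root cause on the host's socket_vmnet daemon rather than on the plugin under test. A pre-flight guard in the harness could turn these nine-second hard failures into skips. The sketch below is hypothetical (requireSocketVMnet does not exist in minikube's net_test.go) and assumes the same socket path as above.

// Hypothetical pre-flight helper for network-plugin tests; it skips the
// test when the socket_vmnet daemon is unreachable instead of letting
// every CNI variant fail with exit status 80.
package nettest

import (
	"net"
	"testing"
)

func requireSocketVMnet(t *testing.T) {
	t.Helper()
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		t.Skipf("socket_vmnet not reachable, skipping: %v", err)
	}
	conn.Close()
}

Called at the top of each Start test, before the out/minikube-darwin-arm64 invocation, this would separate environment outages from genuine plugin regressions in the report.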

TestNetworkPlugins/group/calico/Start (9.86s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-970000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-970000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.862203792s)

-- stdout --
	* [calico-970000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-970000" primary control-plane node in "calico-970000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-970000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:13:42.048408   22403 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:13:42.048546   22403 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:13:42.048553   22403 out.go:304] Setting ErrFile to fd 2...
	I0318 05:13:42.048555   22403 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:13:42.048705   22403 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:13:42.049791   22403 out.go:298] Setting JSON to false
	I0318 05:13:42.065771   22403 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11595,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:13:42.065834   22403 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:13:42.070196   22403 out.go:177] * [calico-970000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:13:42.077964   22403 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:13:42.078014   22403 notify.go:220] Checking for updates...
	I0318 05:13:42.085786   22403 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:13:42.091963   22403 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:13:42.095868   22403 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:13:42.098987   22403 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:13:42.102015   22403 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:13:42.106176   22403 config.go:182] Loaded profile config "cert-expiration-110000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:13:42.106253   22403 config.go:182] Loaded profile config "multinode-730000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:13:42.106307   22403 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:13:42.110920   22403 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 05:13:42.117818   22403 start.go:297] selected driver: qemu2
	I0318 05:13:42.117824   22403 start.go:901] validating driver "qemu2" against <nil>
	I0318 05:13:42.117830   22403 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:13:42.120117   22403 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 05:13:42.123008   22403 out.go:177] * Automatically selected the socket_vmnet network
	I0318 05:13:42.126111   22403 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 05:13:42.126153   22403 cni.go:84] Creating CNI manager for "calico"
	I0318 05:13:42.126157   22403 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0318 05:13:42.126191   22403 start.go:340] cluster config:
	{Name:calico-970000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:13:42.131081   22403 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:13:42.138969   22403 out.go:177] * Starting "calico-970000" primary control-plane node in "calico-970000" cluster
	I0318 05:13:42.142995   22403 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 05:13:42.143011   22403 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 05:13:42.143022   22403 cache.go:56] Caching tarball of preloaded images
	I0318 05:13:42.143090   22403 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:13:42.143097   22403 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 05:13:42.143163   22403 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/calico-970000/config.json ...
	I0318 05:13:42.143176   22403 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/calico-970000/config.json: {Name:mk34c05cd000c8725fc59b3b5fcf6b62794b628e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:13:42.143417   22403 start.go:360] acquireMachinesLock for calico-970000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:13:42.143452   22403 start.go:364] duration metric: took 28.958µs to acquireMachinesLock for "calico-970000"
	I0318 05:13:42.143468   22403 start.go:93] Provisioning new machine with config: &{Name:calico-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:13:42.143507   22403 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:13:42.152031   22403 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 05:13:42.170897   22403 start.go:159] libmachine.API.Create for "calico-970000" (driver="qemu2")
	I0318 05:13:42.170936   22403 client.go:168] LocalClient.Create starting
	I0318 05:13:42.171011   22403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:13:42.171042   22403 main.go:141] libmachine: Decoding PEM data...
	I0318 05:13:42.171056   22403 main.go:141] libmachine: Parsing certificate...
	I0318 05:13:42.171106   22403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:13:42.171130   22403 main.go:141] libmachine: Decoding PEM data...
	I0318 05:13:42.171137   22403 main.go:141] libmachine: Parsing certificate...
	I0318 05:13:42.171558   22403 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:13:42.311686   22403 main.go:141] libmachine: Creating SSH key...
	I0318 05:13:42.435711   22403 main.go:141] libmachine: Creating Disk image...
	I0318 05:13:42.435718   22403 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:13:42.435913   22403 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/calico-970000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/calico-970000/disk.qcow2
	I0318 05:13:42.448341   22403 main.go:141] libmachine: STDOUT: 
	I0318 05:13:42.448360   22403 main.go:141] libmachine: STDERR: 
	I0318 05:13:42.448413   22403 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/calico-970000/disk.qcow2 +20000M
	I0318 05:13:42.459331   22403 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:13:42.459346   22403 main.go:141] libmachine: STDERR: 
	I0318 05:13:42.459366   22403 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/calico-970000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/calico-970000/disk.qcow2
	I0318 05:13:42.459370   22403 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:13:42.459409   22403 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/calico-970000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/calico-970000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/calico-970000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:2f:78:5b:bd:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/calico-970000/disk.qcow2
	I0318 05:13:42.461124   22403 main.go:141] libmachine: STDOUT: 
	I0318 05:13:42.461137   22403 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:13:42.461154   22403 client.go:171] duration metric: took 290.223ms to LocalClient.Create
	I0318 05:13:44.463294   22403 start.go:128] duration metric: took 2.3198355s to createHost
	I0318 05:13:44.463415   22403 start.go:83] releasing machines lock for "calico-970000", held for 2.320028459s
	W0318 05:13:44.463479   22403 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:13:44.475568   22403 out.go:177] * Deleting "calico-970000" in qemu2 ...
	W0318 05:13:44.503739   22403 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:13:44.503786   22403 start.go:728] Will try again in 5 seconds ...
	I0318 05:13:49.505823   22403 start.go:360] acquireMachinesLock for calico-970000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:13:49.506203   22403 start.go:364] duration metric: took 298.875µs to acquireMachinesLock for "calico-970000"
	I0318 05:13:49.506346   22403 start.go:93] Provisioning new machine with config: &{Name:calico-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:13:49.506695   22403 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:13:49.517332   22403 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 05:13:49.568064   22403 start.go:159] libmachine.API.Create for "calico-970000" (driver="qemu2")
	I0318 05:13:49.568113   22403 client.go:168] LocalClient.Create starting
	I0318 05:13:49.568225   22403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:13:49.568311   22403 main.go:141] libmachine: Decoding PEM data...
	I0318 05:13:49.568332   22403 main.go:141] libmachine: Parsing certificate...
	I0318 05:13:49.568388   22403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:13:49.568430   22403 main.go:141] libmachine: Decoding PEM data...
	I0318 05:13:49.568448   22403 main.go:141] libmachine: Parsing certificate...
	I0318 05:13:49.568981   22403 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:13:49.723116   22403 main.go:141] libmachine: Creating SSH key...
	I0318 05:13:49.806445   22403 main.go:141] libmachine: Creating Disk image...
	I0318 05:13:49.806450   22403 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:13:49.806647   22403 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/calico-970000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/calico-970000/disk.qcow2
	I0318 05:13:49.818804   22403 main.go:141] libmachine: STDOUT: 
	I0318 05:13:49.818827   22403 main.go:141] libmachine: STDERR: 
	I0318 05:13:49.818878   22403 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/calico-970000/disk.qcow2 +20000M
	I0318 05:13:49.829555   22403 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:13:49.829571   22403 main.go:141] libmachine: STDERR: 
	I0318 05:13:49.829583   22403 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/calico-970000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/calico-970000/disk.qcow2
	I0318 05:13:49.829590   22403 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:13:49.829627   22403 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/calico-970000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/calico-970000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/calico-970000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:9e:ca:51:ca:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/calico-970000/disk.qcow2
	I0318 05:13:49.831352   22403 main.go:141] libmachine: STDOUT: 
	I0318 05:13:49.831367   22403 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:13:49.831379   22403 client.go:171] duration metric: took 263.268958ms to LocalClient.Create
	I0318 05:13:51.833485   22403 start.go:128] duration metric: took 2.326833625s to createHost
	I0318 05:13:51.833548   22403 start.go:83] releasing machines lock for "calico-970000", held for 2.327399458s
	W0318 05:13:51.833876   22403 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-970000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-970000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:13:51.843608   22403 out.go:177] 
	W0318 05:13:51.851843   22403 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:13:51.851901   22403 out.go:239] * 
	* 
	W0318 05:13:51.854557   22403 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:13:51.864495   22403 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.86s)
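
Every start in this group dies at the same point: libmachine hands the qemu-system-aarch64 command to /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet, so the VM is never launched. A quick way to confirm the daemon (rather than minikube or QEMU) is at fault, sketched under the assumption that the client will exec any command once connected, as it does with the launch lines above:

	# Does the socket exist, and is a socket_vmnet daemon serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Hypothetical probe reusing the client binary from the logs; while the
	# daemon is down it should fail with the same "Connection refused":
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true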

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-970000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-970000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.915826375s)

                                                
                                                
-- stdout --
	* [custom-flannel-970000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-970000" primary control-plane node in "custom-flannel-970000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-970000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 05:13:54.402076   22524 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:13:54.402207   22524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:13:54.402211   22524 out.go:304] Setting ErrFile to fd 2...
	I0318 05:13:54.402213   22524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:13:54.402345   22524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:13:54.403413   22524 out.go:298] Setting JSON to false
	I0318 05:13:54.419506   22524 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11607,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:13:54.419573   22524 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:13:54.424598   22524 out.go:177] * [custom-flannel-970000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:13:54.432429   22524 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:13:54.432487   22524 notify.go:220] Checking for updates...
	I0318 05:13:54.437508   22524 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:13:54.441445   22524 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:13:54.444443   22524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:13:54.448486   22524 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:13:54.451354   22524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:13:54.454840   22524 config.go:182] Loaded profile config "cert-expiration-110000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:13:54.454911   22524 config.go:182] Loaded profile config "multinode-730000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:13:54.454967   22524 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:13:54.459375   22524 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 05:13:54.466420   22524 start.go:297] selected driver: qemu2
	I0318 05:13:54.466425   22524 start.go:901] validating driver "qemu2" against <nil>
	I0318 05:13:54.466432   22524 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:13:54.468725   22524 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 05:13:54.472490   22524 out.go:177] * Automatically selected the socket_vmnet network
	I0318 05:13:54.475520   22524 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 05:13:54.475577   22524 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0318 05:13:54.475586   22524 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0318 05:13:54.475621   22524 start.go:340] cluster config:
	{Name:custom-flannel-970000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:13:54.480333   22524 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:13:54.487499   22524 out.go:177] * Starting "custom-flannel-970000" primary control-plane node in "custom-flannel-970000" cluster
	I0318 05:13:54.491435   22524 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 05:13:54.491451   22524 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 05:13:54.491467   22524 cache.go:56] Caching tarball of preloaded images
	I0318 05:13:54.491537   22524 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:13:54.491543   22524 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 05:13:54.491617   22524 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/custom-flannel-970000/config.json ...
	I0318 05:13:54.491632   22524 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/custom-flannel-970000/config.json: {Name:mk40026e7a5a8e14a09878ad7413f284deaea429 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:13:54.491848   22524 start.go:360] acquireMachinesLock for custom-flannel-970000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:13:54.491882   22524 start.go:364] duration metric: took 25.875µs to acquireMachinesLock for "custom-flannel-970000"
	I0318 05:13:54.491894   22524 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:13:54.491921   22524 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:13:54.500367   22524 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 05:13:54.517703   22524 start.go:159] libmachine.API.Create for "custom-flannel-970000" (driver="qemu2")
	I0318 05:13:54.517737   22524 client.go:168] LocalClient.Create starting
	I0318 05:13:54.517806   22524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:13:54.517835   22524 main.go:141] libmachine: Decoding PEM data...
	I0318 05:13:54.517845   22524 main.go:141] libmachine: Parsing certificate...
	I0318 05:13:54.517892   22524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:13:54.517916   22524 main.go:141] libmachine: Decoding PEM data...
	I0318 05:13:54.517923   22524 main.go:141] libmachine: Parsing certificate...
	I0318 05:13:54.518297   22524 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:13:54.659964   22524 main.go:141] libmachine: Creating SSH key...
	I0318 05:13:54.785278   22524 main.go:141] libmachine: Creating Disk image...
	I0318 05:13:54.785285   22524 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:13:54.785466   22524 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/custom-flannel-970000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/custom-flannel-970000/disk.qcow2
	I0318 05:13:54.797938   22524 main.go:141] libmachine: STDOUT: 
	I0318 05:13:54.797959   22524 main.go:141] libmachine: STDERR: 
	I0318 05:13:54.798023   22524 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/custom-flannel-970000/disk.qcow2 +20000M
	I0318 05:13:54.808800   22524 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:13:54.808819   22524 main.go:141] libmachine: STDERR: 
	I0318 05:13:54.808843   22524 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/custom-flannel-970000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/custom-flannel-970000/disk.qcow2
	I0318 05:13:54.808848   22524 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:13:54.808876   22524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/custom-flannel-970000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/custom-flannel-970000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/custom-flannel-970000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:e5:a7:38:30:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/custom-flannel-970000/disk.qcow2
	I0318 05:13:54.810672   22524 main.go:141] libmachine: STDOUT: 
	I0318 05:13:54.810689   22524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:13:54.810714   22524 client.go:171] duration metric: took 292.981542ms to LocalClient.Create
	I0318 05:13:56.811030   22524 start.go:128] duration metric: took 2.319151083s to createHost
	I0318 05:13:56.811114   22524 start.go:83] releasing machines lock for "custom-flannel-970000", held for 2.319300417s
	W0318 05:13:56.811187   22524 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:13:56.821411   22524 out.go:177] * Deleting "custom-flannel-970000" in qemu2 ...
	W0318 05:13:56.852820   22524 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:13:56.852862   22524 start.go:728] Will try again in 5 seconds ...
	I0318 05:14:01.854956   22524 start.go:360] acquireMachinesLock for custom-flannel-970000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:14:01.855378   22524 start.go:364] duration metric: took 321.458µs to acquireMachinesLock for "custom-flannel-970000"
	I0318 05:14:01.855511   22524 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:14:01.855827   22524 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:14:01.865467   22524 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 05:14:01.914402   22524 start.go:159] libmachine.API.Create for "custom-flannel-970000" (driver="qemu2")
	I0318 05:14:01.914444   22524 client.go:168] LocalClient.Create starting
	I0318 05:14:01.914565   22524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:14:01.914638   22524 main.go:141] libmachine: Decoding PEM data...
	I0318 05:14:01.914656   22524 main.go:141] libmachine: Parsing certificate...
	I0318 05:14:01.914729   22524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:14:01.914769   22524 main.go:141] libmachine: Decoding PEM data...
	I0318 05:14:01.914781   22524 main.go:141] libmachine: Parsing certificate...
	I0318 05:14:01.915301   22524 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:14:02.066638   22524 main.go:141] libmachine: Creating SSH key...
	I0318 05:14:02.212665   22524 main.go:141] libmachine: Creating Disk image...
	I0318 05:14:02.212675   22524 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:14:02.212877   22524 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/custom-flannel-970000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/custom-flannel-970000/disk.qcow2
	I0318 05:14:02.225416   22524 main.go:141] libmachine: STDOUT: 
	I0318 05:14:02.225437   22524 main.go:141] libmachine: STDERR: 
	I0318 05:14:02.225498   22524 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/custom-flannel-970000/disk.qcow2 +20000M
	I0318 05:14:02.236029   22524 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:14:02.236044   22524 main.go:141] libmachine: STDERR: 
	I0318 05:14:02.236064   22524 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/custom-flannel-970000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/custom-flannel-970000/disk.qcow2
	I0318 05:14:02.236072   22524 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:14:02.236124   22524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/custom-flannel-970000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/custom-flannel-970000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/custom-flannel-970000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:c6:92:e8:de:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/custom-flannel-970000/disk.qcow2
	I0318 05:14:02.237864   22524 main.go:141] libmachine: STDOUT: 
	I0318 05:14:02.237886   22524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:14:02.237905   22524 client.go:171] duration metric: took 323.463916ms to LocalClient.Create
	I0318 05:14:04.240007   22524 start.go:128] duration metric: took 2.384231458s to createHost
	I0318 05:14:04.240067   22524 start.go:83] releasing machines lock for "custom-flannel-970000", held for 2.384745583s
	W0318 05:14:04.240477   22524 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-970000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-970000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:14:04.257306   22524 out.go:177] 
	W0318 05:14:04.261260   22524 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:14:04.261286   22524 out.go:239] * 
	* 
	W0318 05:14:04.263639   22524 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:14:04.272285   22524 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.92s)
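
The --cni value makes no difference to the outcome: the guest never boots, so custom-flannel fails exactly as calico did. Restarting the daemon on the socket path minikube expects should unblock the whole group. A sketch of the manual restart, assuming the non-Homebrew install prefix seen in these logs and the invocation style documented in the socket_vmnet README (the gateway address is only an example value):

	# Run the daemon as root on the expected socket path:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet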

                                                
                                    
TestNetworkPlugins/group/false/Start (9.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-970000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-970000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.848898375s)

                                                
                                                
-- stdout --
	* [false-970000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-970000" primary control-plane node in "false-970000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-970000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 05:14:06.791991   22645 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:14:06.792122   22645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:14:06.792125   22645 out.go:304] Setting ErrFile to fd 2...
	I0318 05:14:06.792127   22645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:14:06.792262   22645 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:14:06.793318   22645 out.go:298] Setting JSON to false
	I0318 05:14:06.809368   22645 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11619,"bootTime":1710752427,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:14:06.809444   22645 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:14:06.815583   22645 out.go:177] * [false-970000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:14:06.823574   22645 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:14:06.823667   22645 notify.go:220] Checking for updates...
	I0318 05:14:06.827629   22645 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:14:06.831566   22645 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:14:06.834586   22645 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:14:06.838530   22645 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:14:06.841495   22645 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:14:06.844945   22645 config.go:182] Loaded profile config "cert-expiration-110000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:14:06.845019   22645 config.go:182] Loaded profile config "multinode-730000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:14:06.845065   22645 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:14:06.848579   22645 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 05:14:06.855588   22645 start.go:297] selected driver: qemu2
	I0318 05:14:06.855598   22645 start.go:901] validating driver "qemu2" against <nil>
	I0318 05:14:06.855604   22645 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:14:06.857858   22645 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 05:14:06.861593   22645 out.go:177] * Automatically selected the socket_vmnet network
	I0318 05:14:06.865659   22645 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 05:14:06.865705   22645 cni.go:84] Creating CNI manager for "false"
	I0318 05:14:06.865741   22645 start.go:340] cluster config:
	{Name:false-970000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:14:06.870425   22645 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:14:06.878390   22645 out.go:177] * Starting "false-970000" primary control-plane node in "false-970000" cluster
	I0318 05:14:06.882585   22645 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 05:14:06.882600   22645 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 05:14:06.882615   22645 cache.go:56] Caching tarball of preloaded images
	I0318 05:14:06.882707   22645 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:14:06.882717   22645 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 05:14:06.882775   22645 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/false-970000/config.json ...
	I0318 05:14:06.882795   22645 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/false-970000/config.json: {Name:mkb1a701481e708e929fc08150d63fc727a95ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:14:06.883020   22645 start.go:360] acquireMachinesLock for false-970000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:14:06.883054   22645 start.go:364] duration metric: took 27.833µs to acquireMachinesLock for "false-970000"
	I0318 05:14:06.883068   22645 start.go:93] Provisioning new machine with config: &{Name:false-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:14:06.883119   22645 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:14:06.886559   22645 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 05:14:06.904414   22645 start.go:159] libmachine.API.Create for "false-970000" (driver="qemu2")
	I0318 05:14:06.904444   22645 client.go:168] LocalClient.Create starting
	I0318 05:14:06.904504   22645 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:14:06.904535   22645 main.go:141] libmachine: Decoding PEM data...
	I0318 05:14:06.904545   22645 main.go:141] libmachine: Parsing certificate...
	I0318 05:14:06.904587   22645 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:14:06.904610   22645 main.go:141] libmachine: Decoding PEM data...
	I0318 05:14:06.904618   22645 main.go:141] libmachine: Parsing certificate...
	I0318 05:14:06.905049   22645 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:14:07.045417   22645 main.go:141] libmachine: Creating SSH key...
	I0318 05:14:07.169264   22645 main.go:141] libmachine: Creating Disk image...
	I0318 05:14:07.169274   22645 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:14:07.169454   22645 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/false-970000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/false-970000/disk.qcow2
	I0318 05:14:07.181447   22645 main.go:141] libmachine: STDOUT: 
	I0318 05:14:07.181471   22645 main.go:141] libmachine: STDERR: 
	I0318 05:14:07.181516   22645 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/false-970000/disk.qcow2 +20000M
	I0318 05:14:07.192156   22645 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:14:07.192179   22645 main.go:141] libmachine: STDERR: 
	I0318 05:14:07.192196   22645 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/false-970000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/false-970000/disk.qcow2
	I0318 05:14:07.192202   22645 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:14:07.192242   22645 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/false-970000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/false-970000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/false-970000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:7f:07:82:dc:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/false-970000/disk.qcow2
	I0318 05:14:07.194039   22645 main.go:141] libmachine: STDOUT: 
	I0318 05:14:07.194056   22645 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:14:07.194073   22645 client.go:171] duration metric: took 289.633208ms to LocalClient.Create
	I0318 05:14:09.194843   22645 start.go:128] duration metric: took 2.311754542s to createHost
	I0318 05:14:09.194946   22645 start.go:83] releasing machines lock for "false-970000", held for 2.3119585s
	W0318 05:14:09.195018   22645 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:14:09.210343   22645 out.go:177] * Deleting "false-970000" in qemu2 ...
	W0318 05:14:09.234804   22645 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:14:09.234850   22645 start.go:728] Will try again in 5 seconds ...
	I0318 05:14:14.236946   22645 start.go:360] acquireMachinesLock for false-970000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:14:14.237306   22645 start.go:364] duration metric: took 285µs to acquireMachinesLock for "false-970000"
	I0318 05:14:14.237426   22645 start.go:93] Provisioning new machine with config: &{Name:false-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:14:14.237657   22645 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:14:14.248172   22645 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 05:14:14.297365   22645 start.go:159] libmachine.API.Create for "false-970000" (driver="qemu2")
	I0318 05:14:14.297408   22645 client.go:168] LocalClient.Create starting
	I0318 05:14:14.297520   22645 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:14:14.297579   22645 main.go:141] libmachine: Decoding PEM data...
	I0318 05:14:14.297596   22645 main.go:141] libmachine: Parsing certificate...
	I0318 05:14:14.297656   22645 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:14:14.297699   22645 main.go:141] libmachine: Decoding PEM data...
	I0318 05:14:14.297710   22645 main.go:141] libmachine: Parsing certificate...
	I0318 05:14:14.298280   22645 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:14:14.449661   22645 main.go:141] libmachine: Creating SSH key...
	I0318 05:14:14.541268   22645 main.go:141] libmachine: Creating Disk image...
	I0318 05:14:14.541273   22645 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:14:14.541463   22645 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/false-970000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/false-970000/disk.qcow2
	I0318 05:14:14.553908   22645 main.go:141] libmachine: STDOUT: 
	I0318 05:14:14.553930   22645 main.go:141] libmachine: STDERR: 
	I0318 05:14:14.553991   22645 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/false-970000/disk.qcow2 +20000M
	I0318 05:14:14.564791   22645 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:14:14.564813   22645 main.go:141] libmachine: STDERR: 
	I0318 05:14:14.564827   22645 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/false-970000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/false-970000/disk.qcow2
	I0318 05:14:14.564836   22645 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:14:14.564870   22645 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/false-970000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/false-970000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/false-970000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:d4:e5:d1:4d:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/false-970000/disk.qcow2
	I0318 05:14:14.566627   22645 main.go:141] libmachine: STDOUT: 
	I0318 05:14:14.566645   22645 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:14:14.566657   22645 client.go:171] duration metric: took 269.253333ms to LocalClient.Create
	I0318 05:14:16.568760   22645 start.go:128] duration metric: took 2.331150292s to createHost
	I0318 05:14:16.568862   22645 start.go:83] releasing machines lock for "false-970000", held for 2.331580167s
	W0318 05:14:16.569236   22645 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-970000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-970000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:14:16.577891   22645 out.go:177] 
	W0318 05:14:16.583005   22645 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:14:16.583044   22645 out.go:239] * 
	* 
	W0318 05:14:16.585644   22645 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:14:16.594832   22645 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.85s)
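
Every failure in this group reduces to the same symptom: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the qemu-system-aarch64 process above never starts. The following standalone Go sketch is a hypothetical diagnostic helper (not part of the minikube codebase) that probes the same unix socket the failing command uses; a "connection refused" from it reproduces the symptom in this report.

	// vmnet_probe.go - a minimal diagnostic sketch (assumed helper, not minikube code).
	// It dials the unix socket that the failing QEMU invocations above rely on.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const socketPath = "/var/run/socket_vmnet" // path taken from the log above
		conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1) // matches the "Connection refused" symptom in the log
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, the likely cause on this agent is that the socket_vmnet daemon is not running or is listening on a different path; restarting it is outside the scope of this report.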

TestNetworkPlugins/group/enable-default-cni/Start (9.82s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-970000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-970000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.819506583s)

-- stdout --
	* [enable-default-cni-970000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-970000" primary control-plane node in "enable-default-cni-970000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-970000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:14:18.871744   22759 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:14:18.871891   22759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:14:18.871897   22759 out.go:304] Setting ErrFile to fd 2...
	I0318 05:14:18.871899   22759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:14:18.872027   22759 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:14:18.873089   22759 out.go:298] Setting JSON to false
	I0318 05:14:18.889165   22759 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11631,"bootTime":1710752427,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:14:18.889214   22759 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:14:18.895701   22759 out.go:177] * [enable-default-cni-970000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:14:18.903624   22759 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:14:18.903658   22759 notify.go:220] Checking for updates...
	I0318 05:14:18.911688   22759 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:14:18.914688   22759 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:14:18.917738   22759 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:14:18.920729   22759 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:14:18.923733   22759 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:14:18.927073   22759 config.go:182] Loaded profile config "cert-expiration-110000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:14:18.927145   22759 config.go:182] Loaded profile config "multinode-730000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:14:18.927196   22759 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:14:18.931737   22759 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 05:14:18.938653   22759 start.go:297] selected driver: qemu2
	I0318 05:14:18.938658   22759 start.go:901] validating driver "qemu2" against <nil>
	I0318 05:14:18.938665   22759 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:14:18.940907   22759 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 05:14:18.943716   22759 out.go:177] * Automatically selected the socket_vmnet network
	E0318 05:14:18.946706   22759 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0318 05:14:18.946723   22759 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 05:14:18.946776   22759 cni.go:84] Creating CNI manager for "bridge"
	I0318 05:14:18.946784   22759 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 05:14:18.946837   22759 start.go:340] cluster config:
	{Name:enable-default-cni-970000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:14:18.951547   22759 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:14:18.959563   22759 out.go:177] * Starting "enable-default-cni-970000" primary control-plane node in "enable-default-cni-970000" cluster
	I0318 05:14:18.963700   22759 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 05:14:18.963717   22759 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 05:14:18.963731   22759 cache.go:56] Caching tarball of preloaded images
	I0318 05:14:18.963794   22759 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:14:18.963802   22759 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 05:14:18.963875   22759 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/enable-default-cni-970000/config.json ...
	I0318 05:14:18.963887   22759 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/enable-default-cni-970000/config.json: {Name:mkd39cda191c2fcfa93602ec34b2c24cd0e0b334 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:14:18.964119   22759 start.go:360] acquireMachinesLock for enable-default-cni-970000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:14:18.964154   22759 start.go:364] duration metric: took 27.041µs to acquireMachinesLock for "enable-default-cni-970000"
	I0318 05:14:18.964168   22759 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:14:18.964201   22759 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:14:18.971664   22759 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 05:14:18.990113   22759 start.go:159] libmachine.API.Create for "enable-default-cni-970000" (driver="qemu2")
	I0318 05:14:18.990146   22759 client.go:168] LocalClient.Create starting
	I0318 05:14:18.990211   22759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:14:18.990246   22759 main.go:141] libmachine: Decoding PEM data...
	I0318 05:14:18.990255   22759 main.go:141] libmachine: Parsing certificate...
	I0318 05:14:18.990300   22759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:14:18.990325   22759 main.go:141] libmachine: Decoding PEM data...
	I0318 05:14:18.990333   22759 main.go:141] libmachine: Parsing certificate...
	I0318 05:14:18.990727   22759 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:14:19.138795   22759 main.go:141] libmachine: Creating SSH key...
	I0318 05:14:19.221123   22759 main.go:141] libmachine: Creating Disk image...
	I0318 05:14:19.221133   22759 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:14:19.221327   22759 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/enable-default-cni-970000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/enable-default-cni-970000/disk.qcow2
	I0318 05:14:19.233668   22759 main.go:141] libmachine: STDOUT: 
	I0318 05:14:19.233688   22759 main.go:141] libmachine: STDERR: 
	I0318 05:14:19.233745   22759 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/enable-default-cni-970000/disk.qcow2 +20000M
	I0318 05:14:19.244327   22759 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:14:19.244354   22759 main.go:141] libmachine: STDERR: 
	I0318 05:14:19.244368   22759 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/enable-default-cni-970000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/enable-default-cni-970000/disk.qcow2
	I0318 05:14:19.244371   22759 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:14:19.244398   22759 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/enable-default-cni-970000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/enable-default-cni-970000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/enable-default-cni-970000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:b3:86:77:48:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/enable-default-cni-970000/disk.qcow2
	I0318 05:14:19.246223   22759 main.go:141] libmachine: STDOUT: 
	I0318 05:14:19.246240   22759 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:14:19.246258   22759 client.go:171] duration metric: took 256.115416ms to LocalClient.Create
	I0318 05:14:21.248438   22759 start.go:128] duration metric: took 2.2842905s to createHost
	I0318 05:14:21.248533   22759 start.go:83] releasing machines lock for "enable-default-cni-970000", held for 2.284446209s
	W0318 05:14:21.248628   22759 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:14:21.258464   22759 out.go:177] * Deleting "enable-default-cni-970000" in qemu2 ...
	W0318 05:14:21.290048   22759 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:14:21.290086   22759 start.go:728] Will try again in 5 seconds ...
	I0318 05:14:26.292080   22759 start.go:360] acquireMachinesLock for enable-default-cni-970000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:14:26.292489   22759 start.go:364] duration metric: took 301.417µs to acquireMachinesLock for "enable-default-cni-970000"
	I0318 05:14:26.292631   22759 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:14:26.292906   22759 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:14:26.301455   22759 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 05:14:26.350349   22759 start.go:159] libmachine.API.Create for "enable-default-cni-970000" (driver="qemu2")
	I0318 05:14:26.350399   22759 client.go:168] LocalClient.Create starting
	I0318 05:14:26.350496   22759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:14:26.350552   22759 main.go:141] libmachine: Decoding PEM data...
	I0318 05:14:26.350566   22759 main.go:141] libmachine: Parsing certificate...
	I0318 05:14:26.350623   22759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:14:26.350668   22759 main.go:141] libmachine: Decoding PEM data...
	I0318 05:14:26.350678   22759 main.go:141] libmachine: Parsing certificate...
	I0318 05:14:26.351170   22759 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:14:26.502873   22759 main.go:141] libmachine: Creating SSH key...
	I0318 05:14:26.592141   22759 main.go:141] libmachine: Creating Disk image...
	I0318 05:14:26.592151   22759 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:14:26.592338   22759 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/enable-default-cni-970000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/enable-default-cni-970000/disk.qcow2
	I0318 05:14:26.604778   22759 main.go:141] libmachine: STDOUT: 
	I0318 05:14:26.604883   22759 main.go:141] libmachine: STDERR: 
	I0318 05:14:26.604964   22759 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/enable-default-cni-970000/disk.qcow2 +20000M
	I0318 05:14:26.615599   22759 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:14:26.615618   22759 main.go:141] libmachine: STDERR: 
	I0318 05:14:26.615630   22759 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/enable-default-cni-970000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/enable-default-cni-970000/disk.qcow2
	I0318 05:14:26.615635   22759 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:14:26.615667   22759 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/enable-default-cni-970000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/enable-default-cni-970000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/enable-default-cni-970000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:e0:2b:25:84:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/enable-default-cni-970000/disk.qcow2
	I0318 05:14:26.617349   22759 main.go:141] libmachine: STDOUT: 
	I0318 05:14:26.617366   22759 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:14:26.617379   22759 client.go:171] duration metric: took 266.982042ms to LocalClient.Create
	I0318 05:14:28.619486   22759 start.go:128] duration metric: took 2.326622833s to createHost
	I0318 05:14:28.619559   22759 start.go:83] releasing machines lock for "enable-default-cni-970000", held for 2.32710375s
	W0318 05:14:28.620004   22759 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-970000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-970000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:14:28.633628   22759 out.go:177] 
	W0318 05:14:28.636631   22759 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:14:28.636654   22759 out.go:239] * 
	* 
	W0318 05:14:28.639092   22759 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:14:28.646623   22759 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.82s)
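
As the stderr above shows, minikube rewrites the deprecated --enable-default-cni flag to --cni=bridge (the E0318 start_flags.go line), then follows the same create, fail, delete, 5-second pause, single-retry flow as the other profiles before exiting with GUEST_PROVISION. The following Go sketch is only an illustration of that observed control flow, not minikube's actual start.go implementation.

	// retry_sketch.go - illustrative only; mirrors the message sequence in the log
	// above ("StartHost failed, but will try again" ... "Will try again in 5 seconds").
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the qemu2 host creation that fails in this report.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func startWithRetry() error {
		err := createHost()
		if err == nil {
			return nil
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // fixed back-off seen in the log
		return createHost()         // second and final attempt
	}

	func main() {
		if err := startWithRetry(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err) // surfaces as exit status 80
		}
	}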

TestNetworkPlugins/group/flannel/Start (9.85s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-970000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-970000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.851220375s)

-- stdout --
	* [flannel-970000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-970000" primary control-plane node in "flannel-970000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-970000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:14:30.914793   22869 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:14:30.914933   22869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:14:30.914936   22869 out.go:304] Setting ErrFile to fd 2...
	I0318 05:14:30.914939   22869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:14:30.915059   22869 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:14:30.916131   22869 out.go:298] Setting JSON to false
	I0318 05:14:30.932085   22869 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11643,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:14:30.932159   22869 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:14:30.939028   22869 out.go:177] * [flannel-970000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:14:30.946952   22869 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:14:30.951947   22869 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:14:30.946962   22869 notify.go:220] Checking for updates...
	I0318 05:14:30.958920   22869 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:14:30.961942   22869 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:14:30.965014   22869 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:14:30.967951   22869 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:14:30.971340   22869 config.go:182] Loaded profile config "cert-expiration-110000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:14:30.971408   22869 config.go:182] Loaded profile config "multinode-730000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:14:30.971458   22869 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:14:30.975899   22869 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 05:14:30.982948   22869 start.go:297] selected driver: qemu2
	I0318 05:14:30.982953   22869 start.go:901] validating driver "qemu2" against <nil>
	I0318 05:14:30.982961   22869 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:14:30.985201   22869 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 05:14:30.987887   22869 out.go:177] * Automatically selected the socket_vmnet network
	I0318 05:14:30.991004   22869 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 05:14:30.991062   22869 cni.go:84] Creating CNI manager for "flannel"
	I0318 05:14:30.991067   22869 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0318 05:14:30.991108   22869 start.go:340] cluster config:
	{Name:flannel-970000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:14:30.995646   22869 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:14:31.002925   22869 out.go:177] * Starting "flannel-970000" primary control-plane node in "flannel-970000" cluster
	I0318 05:14:31.006948   22869 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 05:14:31.006962   22869 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 05:14:31.006975   22869 cache.go:56] Caching tarball of preloaded images
	I0318 05:14:31.007035   22869 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:14:31.007041   22869 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 05:14:31.007119   22869 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/flannel-970000/config.json ...
	I0318 05:14:31.007131   22869 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/flannel-970000/config.json: {Name:mk0550fe46818d6ad30794768baf91b424c4dba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:14:31.007416   22869 start.go:360] acquireMachinesLock for flannel-970000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:14:31.007450   22869 start.go:364] duration metric: took 27.959µs to acquireMachinesLock for "flannel-970000"
	I0318 05:14:31.007465   22869 start.go:93] Provisioning new machine with config: &{Name:flannel-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:14:31.007492   22869 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:14:31.011882   22869 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 05:14:31.029424   22869 start.go:159] libmachine.API.Create for "flannel-970000" (driver="qemu2")
	I0318 05:14:31.029449   22869 client.go:168] LocalClient.Create starting
	I0318 05:14:31.029506   22869 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:14:31.029533   22869 main.go:141] libmachine: Decoding PEM data...
	I0318 05:14:31.029541   22869 main.go:141] libmachine: Parsing certificate...
	I0318 05:14:31.029585   22869 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:14:31.029607   22869 main.go:141] libmachine: Decoding PEM data...
	I0318 05:14:31.029615   22869 main.go:141] libmachine: Parsing certificate...
	I0318 05:14:31.029989   22869 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:14:31.169444   22869 main.go:141] libmachine: Creating SSH key...
	I0318 05:14:31.329335   22869 main.go:141] libmachine: Creating Disk image...
	I0318 05:14:31.329343   22869 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:14:31.329520   22869 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/flannel-970000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/flannel-970000/disk.qcow2
	I0318 05:14:31.341943   22869 main.go:141] libmachine: STDOUT: 
	I0318 05:14:31.341964   22869 main.go:141] libmachine: STDERR: 
	I0318 05:14:31.342021   22869 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/flannel-970000/disk.qcow2 +20000M
	I0318 05:14:31.352710   22869 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:14:31.352723   22869 main.go:141] libmachine: STDERR: 
	I0318 05:14:31.352732   22869 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/flannel-970000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/flannel-970000/disk.qcow2
	I0318 05:14:31.352736   22869 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:14:31.352766   22869 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/flannel-970000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/flannel-970000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/flannel-970000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:2d:b2:92:e6:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/flannel-970000/disk.qcow2
	I0318 05:14:31.354455   22869 main.go:141] libmachine: STDOUT: 
	I0318 05:14:31.354470   22869 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:14:31.354489   22869 client.go:171] duration metric: took 325.046084ms to LocalClient.Create
	I0318 05:14:33.356628   22869 start.go:128] duration metric: took 2.349192292s to createHost
	I0318 05:14:33.356700   22869 start.go:83] releasing machines lock for "flannel-970000", held for 2.349319125s
	W0318 05:14:33.356813   22869 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:14:33.371748   22869 out.go:177] * Deleting "flannel-970000" in qemu2 ...
	W0318 05:14:33.396740   22869 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:14:33.396769   22869 start.go:728] Will try again in 5 seconds ...
	I0318 05:14:38.398840   22869 start.go:360] acquireMachinesLock for flannel-970000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:14:38.399220   22869 start.go:364] duration metric: took 299.917µs to acquireMachinesLock for "flannel-970000"
	I0318 05:14:38.399376   22869 start.go:93] Provisioning new machine with config: &{Name:flannel-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:14:38.399587   22869 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:14:38.407935   22869 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 05:14:38.460444   22869 start.go:159] libmachine.API.Create for "flannel-970000" (driver="qemu2")
	I0318 05:14:38.460525   22869 client.go:168] LocalClient.Create starting
	I0318 05:14:38.460683   22869 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:14:38.460746   22869 main.go:141] libmachine: Decoding PEM data...
	I0318 05:14:38.460761   22869 main.go:141] libmachine: Parsing certificate...
	I0318 05:14:38.460824   22869 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:14:38.460865   22869 main.go:141] libmachine: Decoding PEM data...
	I0318 05:14:38.460877   22869 main.go:141] libmachine: Parsing certificate...
	I0318 05:14:38.461396   22869 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:14:38.616036   22869 main.go:141] libmachine: Creating SSH key...
	I0318 05:14:38.664550   22869 main.go:141] libmachine: Creating Disk image...
	I0318 05:14:38.664558   22869 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:14:38.664757   22869 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/flannel-970000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/flannel-970000/disk.qcow2
	I0318 05:14:38.677279   22869 main.go:141] libmachine: STDOUT: 
	I0318 05:14:38.677301   22869 main.go:141] libmachine: STDERR: 
	I0318 05:14:38.677356   22869 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/flannel-970000/disk.qcow2 +20000M
	I0318 05:14:38.687910   22869 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:14:38.687927   22869 main.go:141] libmachine: STDERR: 
	I0318 05:14:38.687947   22869 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/flannel-970000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/flannel-970000/disk.qcow2
	I0318 05:14:38.687951   22869 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:14:38.687997   22869 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/flannel-970000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/flannel-970000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/flannel-970000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:03:cf:a8:c0:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/flannel-970000/disk.qcow2
	I0318 05:14:38.689700   22869 main.go:141] libmachine: STDOUT: 
	I0318 05:14:38.689715   22869 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:14:38.689728   22869 client.go:171] duration metric: took 229.185417ms to LocalClient.Create
	I0318 05:14:40.691839   22869 start.go:128] duration metric: took 2.292297625s to createHost
	I0318 05:14:40.691889   22869 start.go:83] releasing machines lock for "flannel-970000", held for 2.292720083s
	W0318 05:14:40.692253   22869 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-970000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-970000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:14:40.706962   22869 out.go:177] 
	W0318 05:14:40.710050   22869 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:14:40.710074   22869 out.go:239] * 
	* 
	W0318 05:14:40.712648   22869 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:14:40.719929   22869 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.85s)
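
Note that in every attempt the disk-image preparation succeeds ("Image resized.") and only the network attach fails, so the breakage is confined to socket_vmnet rather than QEMU or qemu-img. For reference, the two qemu-img steps the driver runs can be replayed standalone with a sketch like the one below; the file paths are placeholders, and qemu-img is assumed to be on PATH.

	// diskprep_sketch.go - replays the two qemu-img invocations from the log
	// (convert raw -> qcow2, then grow the image by +20000M) against placeholder paths.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		raw := "disk.qcow2.raw" // placeholder for the per-profile raw image
		qcow2 := "disk.qcow2"   // placeholder for the qcow2 target

		steps := [][]string{
			{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2},
			{"qemu-img", "resize", qcow2, "+20000M"},
		}
		for _, args := range steps {
			out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
			fmt.Printf("executing: %v\n%s", args, out)
			if err != nil {
				fmt.Println("failed:", err)
				return
			}
		}
	}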

TestNetworkPlugins/group/bridge/Start (9.85s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-970000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-970000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.851572875s)

-- stdout --
	* [bridge-970000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-970000" primary control-plane node in "bridge-970000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-970000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:14:43.198705   22990 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:14:43.198839   22990 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:14:43.198843   22990 out.go:304] Setting ErrFile to fd 2...
	I0318 05:14:43.198845   22990 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:14:43.198974   22990 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:14:43.200031   22990 out.go:298] Setting JSON to false
	I0318 05:14:43.215988   22990 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11656,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:14:43.216049   22990 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:14:43.222212   22990 out.go:177] * [bridge-970000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:14:43.231445   22990 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:14:43.231510   22990 notify.go:220] Checking for updates...
	I0318 05:14:43.238374   22990 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:14:43.241435   22990 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:14:43.245427   22990 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:14:43.248414   22990 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:14:43.251450   22990 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:14:43.254766   22990 config.go:182] Loaded profile config "cert-expiration-110000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:14:43.254836   22990 config.go:182] Loaded profile config "multinode-730000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:14:43.254887   22990 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:14:43.259473   22990 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 05:14:43.266337   22990 start.go:297] selected driver: qemu2
	I0318 05:14:43.266343   22990 start.go:901] validating driver "qemu2" against <nil>
	I0318 05:14:43.266349   22990 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:14:43.268623   22990 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 05:14:43.272420   22990 out.go:177] * Automatically selected the socket_vmnet network
	I0318 05:14:43.276461   22990 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 05:14:43.276509   22990 cni.go:84] Creating CNI manager for "bridge"
	I0318 05:14:43.276513   22990 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 05:14:43.276547   22990 start.go:340] cluster config:
	{Name:bridge-970000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:14:43.281198   22990 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:14:43.289438   22990 out.go:177] * Starting "bridge-970000" primary control-plane node in "bridge-970000" cluster
	I0318 05:14:43.293324   22990 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 05:14:43.293341   22990 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 05:14:43.293358   22990 cache.go:56] Caching tarball of preloaded images
	I0318 05:14:43.293424   22990 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:14:43.293430   22990 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 05:14:43.293501   22990 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/bridge-970000/config.json ...
	I0318 05:14:43.293514   22990 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/bridge-970000/config.json: {Name:mk7a738460cbfb7caab7b534b611bf993fb8ad79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:14:43.293746   22990 start.go:360] acquireMachinesLock for bridge-970000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:14:43.293782   22990 start.go:364] duration metric: took 29µs to acquireMachinesLock for "bridge-970000"
	I0318 05:14:43.293796   22990 start.go:93] Provisioning new machine with config: &{Name:bridge-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:14:43.293829   22990 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:14:43.298399   22990 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 05:14:43.316763   22990 start.go:159] libmachine.API.Create for "bridge-970000" (driver="qemu2")
	I0318 05:14:43.316794   22990 client.go:168] LocalClient.Create starting
	I0318 05:14:43.316864   22990 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:14:43.316896   22990 main.go:141] libmachine: Decoding PEM data...
	I0318 05:14:43.316906   22990 main.go:141] libmachine: Parsing certificate...
	I0318 05:14:43.316950   22990 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:14:43.316979   22990 main.go:141] libmachine: Decoding PEM data...
	I0318 05:14:43.316989   22990 main.go:141] libmachine: Parsing certificate...
	I0318 05:14:43.317350   22990 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:14:43.458741   22990 main.go:141] libmachine: Creating SSH key...
	I0318 05:14:43.530398   22990 main.go:141] libmachine: Creating Disk image...
	I0318 05:14:43.530403   22990 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:14:43.530596   22990 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/bridge-970000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/bridge-970000/disk.qcow2
	I0318 05:14:43.542925   22990 main.go:141] libmachine: STDOUT: 
	I0318 05:14:43.542951   22990 main.go:141] libmachine: STDERR: 
	I0318 05:14:43.543000   22990 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/bridge-970000/disk.qcow2 +20000M
	I0318 05:14:43.554132   22990 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:14:43.554152   22990 main.go:141] libmachine: STDERR: 
	I0318 05:14:43.554171   22990 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/bridge-970000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/bridge-970000/disk.qcow2
	I0318 05:14:43.554175   22990 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:14:43.554203   22990 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/bridge-970000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/bridge-970000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/bridge-970000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:56:b1:cf:19:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/bridge-970000/disk.qcow2
	I0318 05:14:43.555990   22990 main.go:141] libmachine: STDOUT: 
	I0318 05:14:43.556004   22990 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:14:43.556027   22990 client.go:171] duration metric: took 239.236208ms to LocalClient.Create
	I0318 05:14:45.558209   22990 start.go:128] duration metric: took 2.2644325s to createHost
	I0318 05:14:45.558361   22990 start.go:83] releasing machines lock for "bridge-970000", held for 2.2645815s
	W0318 05:14:45.558428   22990 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:14:45.569586   22990 out.go:177] * Deleting "bridge-970000" in qemu2 ...
	W0318 05:14:45.597061   22990 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:14:45.597092   22990 start.go:728] Will try again in 5 seconds ...
	I0318 05:14:50.597896   22990 start.go:360] acquireMachinesLock for bridge-970000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:14:50.598334   22990 start.go:364] duration metric: took 328.5µs to acquireMachinesLock for "bridge-970000"
	I0318 05:14:50.598437   22990 start.go:93] Provisioning new machine with config: &{Name:bridge-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:14:50.598799   22990 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:14:50.612347   22990 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 05:14:50.661860   22990 start.go:159] libmachine.API.Create for "bridge-970000" (driver="qemu2")
	I0318 05:14:50.661913   22990 client.go:168] LocalClient.Create starting
	I0318 05:14:50.662016   22990 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:14:50.662069   22990 main.go:141] libmachine: Decoding PEM data...
	I0318 05:14:50.662083   22990 main.go:141] libmachine: Parsing certificate...
	I0318 05:14:50.662158   22990 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:14:50.662201   22990 main.go:141] libmachine: Decoding PEM data...
	I0318 05:14:50.662211   22990 main.go:141] libmachine: Parsing certificate...
	I0318 05:14:50.662861   22990 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:14:50.825011   22990 main.go:141] libmachine: Creating SSH key...
	I0318 05:14:50.941170   22990 main.go:141] libmachine: Creating Disk image...
	I0318 05:14:50.941178   22990 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:14:50.941365   22990 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/bridge-970000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/bridge-970000/disk.qcow2
	I0318 05:14:50.953971   22990 main.go:141] libmachine: STDOUT: 
	I0318 05:14:50.953996   22990 main.go:141] libmachine: STDERR: 
	I0318 05:14:50.954053   22990 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/bridge-970000/disk.qcow2 +20000M
	I0318 05:14:50.965212   22990 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:14:50.965232   22990 main.go:141] libmachine: STDERR: 
	I0318 05:14:50.965245   22990 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/bridge-970000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/bridge-970000/disk.qcow2
	I0318 05:14:50.965252   22990 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:14:50.965289   22990 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/bridge-970000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/bridge-970000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/bridge-970000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:d5:44:8a:cd:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/bridge-970000/disk.qcow2
	I0318 05:14:50.967082   22990 main.go:141] libmachine: STDOUT: 
	I0318 05:14:50.967099   22990 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:14:50.967112   22990 client.go:171] duration metric: took 305.203166ms to LocalClient.Create
	I0318 05:14:52.969369   22990 start.go:128] duration metric: took 2.3706185s to createHost
	I0318 05:14:52.969430   22990 start.go:83] releasing machines lock for "bridge-970000", held for 2.371145458s
	W0318 05:14:52.969722   22990 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-970000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-970000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:14:52.984521   22990 out.go:177] 
	W0318 05:14:52.989433   22990 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:14:52.989470   22990 out.go:239] * 
	* 
	W0318 05:14:52.991824   22990 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:14:53.005272   22990 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.85s)
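
Every start in this group fails at the same point: socket_vmnet_client gets "Connection refused" from /var/run/socket_vmnet, so QEMU is never handed a network file descriptor and minikube aborts provisioning with GUEST_PROVISION. The following host-side checks are illustrative shell commands, not part of the test suite; the paths simply mirror the log above:

	# Is any socket_vmnet daemon process alive on the host?
	pgrep -fl socket_vmnet
	# Does the UNIX socket exist at the path the driver uses?
	ls -l /var/run/socket_vmnet
	# Probe the socket; "Connection refused" here reproduces the failure.
	nc -U /var/run/socket_vmnet < /dev/null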

TestNetworkPlugins/group/kubenet/Start (9.88s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-970000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-970000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.876528458s)

-- stdout --
	* [kubenet-970000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-970000" primary control-plane node in "kubenet-970000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-970000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:14:55.329326   23101 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:14:55.329470   23101 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:14:55.329474   23101 out.go:304] Setting ErrFile to fd 2...
	I0318 05:14:55.329476   23101 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:14:55.329603   23101 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:14:55.330646   23101 out.go:298] Setting JSON to false
	I0318 05:14:55.346632   23101 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11668,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:14:55.346695   23101 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:14:55.352550   23101 out.go:177] * [kubenet-970000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:14:55.360677   23101 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:14:55.360739   23101 notify.go:220] Checking for updates...
	I0318 05:14:55.367622   23101 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:14:55.370671   23101 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:14:55.373529   23101 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:14:55.376645   23101 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:14:55.383468   23101 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:14:55.387035   23101 config.go:182] Loaded profile config "cert-expiration-110000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:14:55.387118   23101 config.go:182] Loaded profile config "multinode-730000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:14:55.387171   23101 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:14:55.391633   23101 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 05:14:55.396574   23101 start.go:297] selected driver: qemu2
	I0318 05:14:55.396579   23101 start.go:901] validating driver "qemu2" against <nil>
	I0318 05:14:55.396585   23101 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:14:55.398849   23101 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 05:14:55.401635   23101 out.go:177] * Automatically selected the socket_vmnet network
	I0318 05:14:55.404772   23101 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 05:14:55.404822   23101 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0318 05:14:55.404888   23101 start.go:340] cluster config:
	{Name:kubenet-970000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:14:55.409610   23101 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:14:55.416605   23101 out.go:177] * Starting "kubenet-970000" primary control-plane node in "kubenet-970000" cluster
	I0318 05:14:55.420520   23101 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 05:14:55.420537   23101 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 05:14:55.420548   23101 cache.go:56] Caching tarball of preloaded images
	I0318 05:14:55.420612   23101 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:14:55.420619   23101 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 05:14:55.420692   23101 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/kubenet-970000/config.json ...
	I0318 05:14:55.420705   23101 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/kubenet-970000/config.json: {Name:mk1516df12d58c39d8d145d627db864ef0c28a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:14:55.420922   23101 start.go:360] acquireMachinesLock for kubenet-970000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:14:55.420955   23101 start.go:364] duration metric: took 26.959µs to acquireMachinesLock for "kubenet-970000"
	I0318 05:14:55.420968   23101 start.go:93] Provisioning new machine with config: &{Name:kubenet-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:14:55.421001   23101 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:14:55.429588   23101 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 05:14:55.447370   23101 start.go:159] libmachine.API.Create for "kubenet-970000" (driver="qemu2")
	I0318 05:14:55.447405   23101 client.go:168] LocalClient.Create starting
	I0318 05:14:55.447474   23101 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:14:55.447503   23101 main.go:141] libmachine: Decoding PEM data...
	I0318 05:14:55.447516   23101 main.go:141] libmachine: Parsing certificate...
	I0318 05:14:55.447566   23101 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:14:55.447590   23101 main.go:141] libmachine: Decoding PEM data...
	I0318 05:14:55.447599   23101 main.go:141] libmachine: Parsing certificate...
	I0318 05:14:55.447967   23101 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:14:55.590216   23101 main.go:141] libmachine: Creating SSH key...
	I0318 05:14:55.679622   23101 main.go:141] libmachine: Creating Disk image...
	I0318 05:14:55.679630   23101 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:14:55.679813   23101 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubenet-970000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubenet-970000/disk.qcow2
	I0318 05:14:55.692187   23101 main.go:141] libmachine: STDOUT: 
	I0318 05:14:55.692204   23101 main.go:141] libmachine: STDERR: 
	I0318 05:14:55.692287   23101 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubenet-970000/disk.qcow2 +20000M
	I0318 05:14:55.702896   23101 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:14:55.702911   23101 main.go:141] libmachine: STDERR: 
	I0318 05:14:55.702927   23101 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubenet-970000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubenet-970000/disk.qcow2
	I0318 05:14:55.702931   23101 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:14:55.702961   23101 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubenet-970000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubenet-970000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubenet-970000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:f4:dc:a6:8e:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubenet-970000/disk.qcow2
	I0318 05:14:55.704689   23101 main.go:141] libmachine: STDOUT: 
	I0318 05:14:55.704703   23101 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:14:55.704719   23101 client.go:171] duration metric: took 257.315833ms to LocalClient.Create
	I0318 05:14:57.706911   23101 start.go:128] duration metric: took 2.285958334s to createHost
	I0318 05:14:57.706966   23101 start.go:83] releasing machines lock for "kubenet-970000", held for 2.286077708s
	W0318 05:14:57.707025   23101 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:14:57.718065   23101 out.go:177] * Deleting "kubenet-970000" in qemu2 ...
	W0318 05:14:57.749201   23101 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:14:57.749230   23101 start.go:728] Will try again in 5 seconds ...
	I0318 05:15:02.749409   23101 start.go:360] acquireMachinesLock for kubenet-970000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:15:02.749838   23101 start.go:364] duration metric: took 344.083µs to acquireMachinesLock for "kubenet-970000"
	I0318 05:15:02.749969   23101 start.go:93] Provisioning new machine with config: &{Name:kubenet-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:15:02.750238   23101 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:15:02.758882   23101 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 05:15:02.808466   23101 start.go:159] libmachine.API.Create for "kubenet-970000" (driver="qemu2")
	I0318 05:15:02.808522   23101 client.go:168] LocalClient.Create starting
	I0318 05:15:02.808633   23101 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:15:02.808694   23101 main.go:141] libmachine: Decoding PEM data...
	I0318 05:15:02.808711   23101 main.go:141] libmachine: Parsing certificate...
	I0318 05:15:02.808777   23101 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:15:02.808825   23101 main.go:141] libmachine: Decoding PEM data...
	I0318 05:15:02.808841   23101 main.go:141] libmachine: Parsing certificate...
	I0318 05:15:02.809415   23101 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:15:02.962870   23101 main.go:141] libmachine: Creating SSH key...
	I0318 05:15:03.103843   23101 main.go:141] libmachine: Creating Disk image...
	I0318 05:15:03.103850   23101 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:15:03.104055   23101 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubenet-970000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubenet-970000/disk.qcow2
	I0318 05:15:03.116753   23101 main.go:141] libmachine: STDOUT: 
	I0318 05:15:03.116776   23101 main.go:141] libmachine: STDERR: 
	I0318 05:15:03.116840   23101 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubenet-970000/disk.qcow2 +20000M
	I0318 05:15:03.127339   23101 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:15:03.127355   23101 main.go:141] libmachine: STDERR: 
	I0318 05:15:03.127368   23101 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubenet-970000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubenet-970000/disk.qcow2
	I0318 05:15:03.127373   23101 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:15:03.127428   23101 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubenet-970000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubenet-970000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubenet-970000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:b4:ae:2b:77:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/kubenet-970000/disk.qcow2
	I0318 05:15:03.129144   23101 main.go:141] libmachine: STDOUT: 
	I0318 05:15:03.129169   23101 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:15:03.129181   23101 client.go:171] duration metric: took 320.665875ms to LocalClient.Create
	I0318 05:15:05.131327   23101 start.go:128] duration metric: took 2.381128458s to createHost
	I0318 05:15:05.131432   23101 start.go:83] releasing machines lock for "kubenet-970000", held for 2.381649s
	W0318 05:15:05.131869   23101 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-970000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-970000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:15:05.144653   23101 out.go:177] 
	W0318 05:15:05.147816   23101 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:15:05.147843   23101 out.go:239] * 
	* 
	W0318 05:15:05.150476   23101 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:15:05.161548   23101 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.88s)
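
The identical connection-refused error across the bridge and kubenet profiles points at the shared socket_vmnet service on the host rather than at either test. A sketch of bringing the daemon back up, assuming the /opt/socket_vmnet layout shown in the log (the gateway address is illustrative, and the Homebrew service assumes socket_vmnet was installed via brew):

	# Run the daemon in the foreground to watch for errors:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
	# Or, if it is managed as a Homebrew service, restart it:
	sudo brew services restart socket_vmnet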

TestStartStop/group/old-k8s-version/serial/FirstStart (10.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-431000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-431000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.0823845s)

-- stdout --
	* [old-k8s-version-431000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-431000" primary control-plane node in "old-k8s-version-431000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-431000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:15:07.480954   23216 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:15:07.481071   23216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:15:07.481073   23216 out.go:304] Setting ErrFile to fd 2...
	I0318 05:15:07.481076   23216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:15:07.481222   23216 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:15:07.482351   23216 out.go:298] Setting JSON to false
	I0318 05:15:07.498589   23216 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11680,"bootTime":1710752427,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:15:07.498672   23216 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:15:07.505427   23216 out.go:177] * [old-k8s-version-431000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:15:07.513349   23216 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:15:07.517425   23216 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:15:07.513389   23216 notify.go:220] Checking for updates...
	I0318 05:15:07.524337   23216 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:15:07.528297   23216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:15:07.531329   23216 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:15:07.534343   23216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:15:07.537774   23216 config.go:182] Loaded profile config "cert-expiration-110000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:15:07.537846   23216 config.go:182] Loaded profile config "multinode-730000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:15:07.537903   23216 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:15:07.541288   23216 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 05:15:07.548314   23216 start.go:297] selected driver: qemu2
	I0318 05:15:07.548321   23216 start.go:901] validating driver "qemu2" against <nil>
	I0318 05:15:07.548328   23216 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:15:07.550633   23216 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 05:15:07.554291   23216 out.go:177] * Automatically selected the socket_vmnet network
	I0318 05:15:07.558372   23216 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 05:15:07.558423   23216 cni.go:84] Creating CNI manager for ""
	I0318 05:15:07.558431   23216 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 05:15:07.558463   23216 start.go:340] cluster config:
	{Name:old-k8s-version-431000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:15:07.563122   23216 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:07.571309   23216 out.go:177] * Starting "old-k8s-version-431000" primary control-plane node in "old-k8s-version-431000" cluster
	I0318 05:15:07.575439   23216 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 05:15:07.575454   23216 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 05:15:07.575464   23216 cache.go:56] Caching tarball of preloaded images
	I0318 05:15:07.575520   23216 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:15:07.575526   23216 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 05:15:07.575626   23216 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/old-k8s-version-431000/config.json ...
	I0318 05:15:07.575638   23216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/old-k8s-version-431000/config.json: {Name:mkc9762dc90b0be36ecdeecbff532fd991450365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:15:07.575868   23216 start.go:360] acquireMachinesLock for old-k8s-version-431000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:15:07.575904   23216 start.go:364] duration metric: took 27µs to acquireMachinesLock for "old-k8s-version-431000"
	I0318 05:15:07.575918   23216 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:15:07.575947   23216 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:15:07.584365   23216 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 05:15:07.603056   23216 start.go:159] libmachine.API.Create for "old-k8s-version-431000" (driver="qemu2")
	I0318 05:15:07.603089   23216 client.go:168] LocalClient.Create starting
	I0318 05:15:07.603157   23216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:15:07.603194   23216 main.go:141] libmachine: Decoding PEM data...
	I0318 05:15:07.603207   23216 main.go:141] libmachine: Parsing certificate...
	I0318 05:15:07.603255   23216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:15:07.603279   23216 main.go:141] libmachine: Decoding PEM data...
	I0318 05:15:07.603285   23216 main.go:141] libmachine: Parsing certificate...
	I0318 05:15:07.603736   23216 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:15:07.744221   23216 main.go:141] libmachine: Creating SSH key...
	I0318 05:15:07.912578   23216 main.go:141] libmachine: Creating Disk image...
	I0318 05:15:07.912585   23216 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:15:07.912782   23216 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/disk.qcow2
	I0318 05:15:07.925237   23216 main.go:141] libmachine: STDOUT: 
	I0318 05:15:07.925275   23216 main.go:141] libmachine: STDERR: 
	I0318 05:15:07.925333   23216 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/disk.qcow2 +20000M
	I0318 05:15:07.935942   23216 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:15:07.935966   23216 main.go:141] libmachine: STDERR: 
	I0318 05:15:07.935981   23216 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/disk.qcow2
	I0318 05:15:07.935985   23216 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:15:07.936020   23216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:70:e1:d3:66:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/disk.qcow2
	I0318 05:15:07.937745   23216 main.go:141] libmachine: STDOUT: 
	I0318 05:15:07.937767   23216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:15:07.937784   23216 client.go:171] duration metric: took 334.702083ms to LocalClient.Create
	I0318 05:15:09.939906   23216 start.go:128] duration metric: took 2.364015292s to createHost
	I0318 05:15:09.939968   23216 start.go:83] releasing machines lock for "old-k8s-version-431000", held for 2.364134208s
	W0318 05:15:09.940082   23216 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:15:09.955076   23216 out.go:177] * Deleting "old-k8s-version-431000" in qemu2 ...
	W0318 05:15:09.983448   23216 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:15:09.983478   23216 start.go:728] Will try again in 5 seconds ...
	I0318 05:15:14.984495   23216 start.go:360] acquireMachinesLock for old-k8s-version-431000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:15:14.985010   23216 start.go:364] duration metric: took 355.75µs to acquireMachinesLock for "old-k8s-version-431000"
	I0318 05:15:14.985134   23216 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:15:14.985444   23216 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:15:14.994037   23216 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 05:15:15.043255   23216 start.go:159] libmachine.API.Create for "old-k8s-version-431000" (driver="qemu2")
	I0318 05:15:15.043309   23216 client.go:168] LocalClient.Create starting
	I0318 05:15:15.043444   23216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:15:15.043516   23216 main.go:141] libmachine: Decoding PEM data...
	I0318 05:15:15.043534   23216 main.go:141] libmachine: Parsing certificate...
	I0318 05:15:15.043601   23216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:15:15.043642   23216 main.go:141] libmachine: Decoding PEM data...
	I0318 05:15:15.043653   23216 main.go:141] libmachine: Parsing certificate...
	I0318 05:15:15.044231   23216 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:15:15.196760   23216 main.go:141] libmachine: Creating SSH key...
	I0318 05:15:15.461829   23216 main.go:141] libmachine: Creating Disk image...
	I0318 05:15:15.461838   23216 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:15:15.462102   23216 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/disk.qcow2
	I0318 05:15:15.475082   23216 main.go:141] libmachine: STDOUT: 
	I0318 05:15:15.475110   23216 main.go:141] libmachine: STDERR: 
	I0318 05:15:15.475182   23216 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/disk.qcow2 +20000M
	I0318 05:15:15.485894   23216 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:15:15.485912   23216 main.go:141] libmachine: STDERR: 
	I0318 05:15:15.485925   23216 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/disk.qcow2
	I0318 05:15:15.485931   23216 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:15:15.485971   23216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:53:bf:67:f2:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/disk.qcow2
	I0318 05:15:15.487647   23216 main.go:141] libmachine: STDOUT: 
	I0318 05:15:15.487680   23216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:15:15.487693   23216 client.go:171] duration metric: took 444.392083ms to LocalClient.Create
	I0318 05:15:17.489358   23216 start.go:128] duration metric: took 2.503953333s to createHost
	I0318 05:15:17.489427   23216 start.go:83] releasing machines lock for "old-k8s-version-431000", held for 2.504466458s
	W0318 05:15:17.489799   23216 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-431000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:15:17.500281   23216 out.go:177] 
	W0318 05:15:17.504426   23216 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:15:17.504466   23216 out.go:239] * 
	W0318 05:15:17.507020   23216 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:15:17.517201   23216 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-431000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000: exit status 7 (69.488167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.15s)
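
Every failure in this group has a single root cause, visible in the stderr above: socket_vmnet_client cannot connect to the daemon socket at /var/run/socket_vmnet. The wrapper's job is to open that socket and exec qemu-system-aarch64 with the connected descriptor inherited as fd 3 (hence -netdev socket,id=net0,fd=3 in the command line), so when the connect is refused the VM is never launched and every later step finds the host stopped. A quick sanity check on the build host, assuming the from-source install that the /opt/socket_vmnet prefix suggests (the gateway address below is only an example):

	# Is the daemon socket present, and is a daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If not, start the daemon as root (or via its launchd service):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet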

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-431000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-431000 create -f testdata/busybox.yaml: exit status 1 (29.009458ms)

** stderr ** 
	error: context "old-k8s-version-431000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-431000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000: exit status 7 (32.278375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-431000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000: exit status 7 (32.149916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-431000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-431000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-431000 describe deploy/metrics-server -n kube-system: exit status 1 (26.892375ms)

** stderr ** 
	error: context "old-k8s-version-431000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-431000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000: exit status 7 (32.880958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-431000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-431000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.193035333s)

-- stdout --
	* [old-k8s-version-431000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-431000" primary control-plane node in "old-k8s-version-431000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-431000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-431000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:15:21.256350   23264 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:15:21.256465   23264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:15:21.256468   23264 out.go:304] Setting ErrFile to fd 2...
	I0318 05:15:21.256470   23264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:15:21.256597   23264 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:15:21.257556   23264 out.go:298] Setting JSON to false
	I0318 05:15:21.273619   23264 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11694,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:15:21.273674   23264 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:15:21.277491   23264 out.go:177] * [old-k8s-version-431000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:15:21.284535   23264 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:15:21.287554   23264 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:15:21.284607   23264 notify.go:220] Checking for updates...
	I0318 05:15:21.293534   23264 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:15:21.296527   23264 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:15:21.299565   23264 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:15:21.302544   23264 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:15:21.304423   23264 config.go:182] Loaded profile config "old-k8s-version-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0318 05:15:21.307498   23264 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 05:15:21.310572   23264 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:15:21.315339   23264 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 05:15:21.322572   23264 start.go:297] selected driver: qemu2
	I0318 05:15:21.322579   23264 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:15:21.322642   23264 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:15:21.324917   23264 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 05:15:21.324968   23264 cni.go:84] Creating CNI manager for ""
	I0318 05:15:21.324976   23264 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 05:15:21.325000   23264 start.go:340] cluster config:
	{Name:old-k8s-version-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-431000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:15:21.329424   23264 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:21.336542   23264 out.go:177] * Starting "old-k8s-version-431000" primary control-plane node in "old-k8s-version-431000" cluster
	I0318 05:15:21.340531   23264 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 05:15:21.340546   23264 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 05:15:21.340553   23264 cache.go:56] Caching tarball of preloaded images
	I0318 05:15:21.340627   23264 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:15:21.340633   23264 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 05:15:21.340715   23264 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/old-k8s-version-431000/config.json ...
	I0318 05:15:21.341236   23264 start.go:360] acquireMachinesLock for old-k8s-version-431000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:15:21.341264   23264 start.go:364] duration metric: took 21.375µs to acquireMachinesLock for "old-k8s-version-431000"
	I0318 05:15:21.341274   23264 start.go:96] Skipping create...Using existing machine configuration
	I0318 05:15:21.341294   23264 fix.go:54] fixHost starting: 
	I0318 05:15:21.341429   23264 fix.go:112] recreateIfNeeded on old-k8s-version-431000: state=Stopped err=<nil>
	W0318 05:15:21.341440   23264 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 05:15:21.345565   23264 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-431000" ...
	I0318 05:15:21.353463   23264 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:53:bf:67:f2:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/disk.qcow2
	I0318 05:15:21.355625   23264 main.go:141] libmachine: STDOUT: 
	I0318 05:15:21.355648   23264 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:15:21.355678   23264 fix.go:56] duration metric: took 14.383333ms for fixHost
	I0318 05:15:21.355684   23264 start.go:83] releasing machines lock for "old-k8s-version-431000", held for 14.416375ms
	W0318 05:15:21.355691   23264 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:15:21.355745   23264 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:15:21.355751   23264 start.go:728] Will try again in 5 seconds ...
	I0318 05:15:26.357732   23264 start.go:360] acquireMachinesLock for old-k8s-version-431000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:15:26.358127   23264 start.go:364] duration metric: took 320.208µs to acquireMachinesLock for "old-k8s-version-431000"
	I0318 05:15:26.358254   23264 start.go:96] Skipping create...Using existing machine configuration
	I0318 05:15:26.358274   23264 fix.go:54] fixHost starting: 
	I0318 05:15:26.358966   23264 fix.go:112] recreateIfNeeded on old-k8s-version-431000: state=Stopped err=<nil>
	W0318 05:15:26.358991   23264 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 05:15:26.367311   23264 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-431000" ...
	I0318 05:15:26.371504   23264 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:53:bf:67:f2:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/old-k8s-version-431000/disk.qcow2
	I0318 05:15:26.381076   23264 main.go:141] libmachine: STDOUT: 
	I0318 05:15:26.381142   23264 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:15:26.381197   23264 fix.go:56] duration metric: took 22.925625ms for fixHost
	I0318 05:15:26.381220   23264 start.go:83] releasing machines lock for "old-k8s-version-431000", held for 23.048292ms
	W0318 05:15:26.381393   23264 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-431000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:15:26.389350   23264 out.go:177] 
	W0318 05:15:26.393398   23264 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:15:26.393437   23264 out.go:239] * 
	W0318 05:15:26.395831   23264 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:15:26.404234   23264 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-431000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000: exit status 7 (71.168958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)
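
Unlike the first start, this run takes the restart path ("Skipping create...Using existing machine configuration") and fails in fixHost on the same refused socket, so the existing stopped VM cannot be revived either. The recovery the log itself suggests only helps once the daemon is reachable again; roughly (the flags mirror the failing invocation above):

	# Drop the stale profile, then re-create it once
	# /var/run/socket_vmnet accepts connections again.
	out/minikube-darwin-arm64 delete -p old-k8s-version-431000
	out/minikube-darwin-arm64 start -p old-k8s-version-431000 --driver=qemu2 --kubernetes-version=v1.20.0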

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-431000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000: exit status 7 (34.08625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-431000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-431000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-431000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.48ms)

** stderr ** 
	error: context "old-k8s-version-431000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-431000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000: exit status 7 (32.055292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-431000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
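
The block above is a go-cmp style (-want +got) diff: each "-" line is an image the test expected "image list" to report but did not get. Since the VM never started, the image list came back empty and all eight expected v1.20.0 images show as missing; this points at the provisioning failure, not at the image cache.
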
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000: exit status 7 (32.489ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-431000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-431000 --alsologtostderr -v=1: exit status 83 (44.047084ms)

-- stdout --
	* The control-plane node old-k8s-version-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-431000"

-- /stdout --
** stderr ** 
	I0318 05:15:26.691898   23283 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:15:26.692259   23283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:15:26.692263   23283 out.go:304] Setting ErrFile to fd 2...
	I0318 05:15:26.692265   23283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:15:26.692438   23283 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:15:26.692647   23283 out.go:298] Setting JSON to false
	I0318 05:15:26.692656   23283 mustload.go:65] Loading cluster: old-k8s-version-431000
	I0318 05:15:26.692855   23283 config.go:182] Loaded profile config "old-k8s-version-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0318 05:15:26.697241   23283 out.go:177] * The control-plane node old-k8s-version-431000 host is not running: state=Stopped
	I0318 05:15:26.700323   23283 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-431000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-431000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000: exit status 7 (31.941458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-431000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000: exit status 7 (32.417334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
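
The exit codes across this group are consistent with a host that never came up: "start" exits 80 (GUEST_PROVISION, per the stderr above), "pause" exits 83 alongside "host is not running: state=Stopped", and "status" exits 7, which helpers_test.go explicitly treats as possibly-ok for a stopped host.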

TestStartStop/group/no-preload/serial/FirstStart (9.84s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-051000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-051000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (9.765128417s)

-- stdout --
	* [no-preload-051000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-051000" primary control-plane node in "no-preload-051000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-051000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:15:27.175257   23306 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:15:27.175369   23306 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:15:27.175373   23306 out.go:304] Setting ErrFile to fd 2...
	I0318 05:15:27.175375   23306 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:15:27.175500   23306 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:15:27.176595   23306 out.go:298] Setting JSON to false
	I0318 05:15:27.192881   23306 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11700,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:15:27.192946   23306 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:15:27.198126   23306 out.go:177] * [no-preload-051000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:15:27.204048   23306 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:15:27.208013   23306 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:15:27.204094   23306 notify.go:220] Checking for updates...
	I0318 05:15:27.213998   23306 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:15:27.217090   23306 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:15:27.220038   23306 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:15:27.223059   23306 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:15:27.226428   23306 config.go:182] Loaded profile config "cert-expiration-110000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:15:27.226489   23306 config.go:182] Loaded profile config "multinode-730000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:15:27.226550   23306 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:15:27.229919   23306 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 05:15:27.237030   23306 start.go:297] selected driver: qemu2
	I0318 05:15:27.237036   23306 start.go:901] validating driver "qemu2" against <nil>
	I0318 05:15:27.237044   23306 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:15:27.239309   23306 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 05:15:27.240888   23306 out.go:177] * Automatically selected the socket_vmnet network
	I0318 05:15:27.244083   23306 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 05:15:27.244120   23306 cni.go:84] Creating CNI manager for ""
	I0318 05:15:27.244128   23306 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:15:27.244136   23306 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 05:15:27.244167   23306 start.go:340] cluster config:
	{Name:no-preload-051000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:15:27.248642   23306 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:27.256041   23306 out.go:177] * Starting "no-preload-051000" primary control-plane node in "no-preload-051000" cluster
	I0318 05:15:27.260100   23306 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 05:15:27.260181   23306 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/no-preload-051000/config.json ...
	I0318 05:15:27.260194   23306 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/no-preload-051000/config.json: {Name:mkd8f5f3156d04e30183c558932e594405c1fa39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:15:27.260234   23306 cache.go:107] acquiring lock: {Name:mk39bd09ca568613e74095f6d80a9acef2e49dbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:27.260237   23306 cache.go:107] acquiring lock: {Name:mk93393bbaeee146fddf4371287dc32fefbcee18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:27.260305   23306 cache.go:115] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0318 05:15:27.260312   23306 cache.go:107] acquiring lock: {Name:mk971db6be6f1135f1cfb55a38ead218f7f935cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:27.260326   23306 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 94.333µs
	I0318 05:15:27.260334   23306 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0318 05:15:27.260346   23306 cache.go:107] acquiring lock: {Name:mk29756f4e78345d2224f3b4522b649e0762335f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:27.260364   23306 cache.go:107] acquiring lock: {Name:mk66a69b2e880d199d9ba166413f715ca5c6886a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:27.260365   23306 cache.go:107] acquiring lock: {Name:mk7d25eb6281aba53b9a4923e5d2b808a4981640 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:27.260244   23306 cache.go:107] acquiring lock: {Name:mk61ac10eb31d93997b77bda9629f707a0547d56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:27.260479   23306 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 05:15:27.260488   23306 cache.go:107] acquiring lock: {Name:mk2e91e257b336427942dc5dc1d32af85666ff84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:27.260489   23306 start.go:360] acquireMachinesLock for no-preload-051000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:15:27.260567   23306 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 05:15:27.260638   23306 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 05:15:27.260655   23306 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 05:15:27.260526   23306 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 05:15:27.260682   23306 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 05:15:27.260585   23306 start.go:364] duration metric: took 70.875µs to acquireMachinesLock for "no-preload-051000"
	I0318 05:15:27.260683   23306 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 05:15:27.260746   23306 start.go:93] Provisioning new machine with config: &{Name:no-preload-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:15:27.260850   23306 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:15:27.269006   23306 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 05:15:27.274205   23306 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 05:15:27.274219   23306 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 05:15:27.274281   23306 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 05:15:27.274901   23306 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 05:15:27.286434   23306 start.go:159] libmachine.API.Create for "no-preload-051000" (driver="qemu2")
	I0318 05:15:27.286456   23306 client.go:168] LocalClient.Create starting
	I0318 05:15:27.286571   23306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:15:27.286610   23306 main.go:141] libmachine: Decoding PEM data...
	I0318 05:15:27.286621   23306 main.go:141] libmachine: Parsing certificate...
	I0318 05:15:27.286671   23306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:15:27.286699   23306 main.go:141] libmachine: Decoding PEM data...
	I0318 05:15:27.286705   23306 main.go:141] libmachine: Parsing certificate...
	I0318 05:15:27.287037   23306 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:15:27.288291   23306 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 05:15:27.288361   23306 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 05:15:27.288477   23306 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 05:15:27.430365   23306 main.go:141] libmachine: Creating SSH key...
	I0318 05:15:27.496438   23306 main.go:141] libmachine: Creating Disk image...
	I0318 05:15:27.496463   23306 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:15:27.496673   23306 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/disk.qcow2
	I0318 05:15:27.509516   23306 main.go:141] libmachine: STDOUT: 
	I0318 05:15:27.509549   23306 main.go:141] libmachine: STDERR: 
	I0318 05:15:27.509615   23306 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/disk.qcow2 +20000M
	I0318 05:15:27.521975   23306 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:15:27.522025   23306 main.go:141] libmachine: STDERR: 
	I0318 05:15:27.522138   23306 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/disk.qcow2
	I0318 05:15:27.522256   23306 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:15:27.522500   23306 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:f7:e0:26:bd:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/disk.qcow2
	I0318 05:15:27.524396   23306 main.go:141] libmachine: STDOUT: 
	I0318 05:15:27.524420   23306 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:15:27.524438   23306 client.go:171] duration metric: took 237.98425ms to LocalClient.Create
	I0318 05:15:29.194536   23306 cache.go:162] opening:  /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0318 05:15:29.332356   23306 cache.go:157] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0318 05:15:29.332397   23306 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 2.072168916s
	I0318 05:15:29.332417   23306 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0318 05:15:29.363271   23306 cache.go:162] opening:  /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 05:15:29.364266   23306 cache.go:162] opening:  /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 05:15:29.369442   23306 cache.go:162] opening:  /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 05:15:29.373909   23306 cache.go:162] opening:  /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0
	I0318 05:15:29.383973   23306 cache.go:162] opening:  /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 05:15:29.390863   23306 cache.go:162] opening:  /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 05:15:29.525242   23306 start.go:128] duration metric: took 2.264450334s to createHost
	I0318 05:15:29.525362   23306 start.go:83] releasing machines lock for "no-preload-051000", held for 2.26471975s
	W0318 05:15:29.525425   23306 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:15:29.542458   23306 out.go:177] * Deleting "no-preload-051000" in qemu2 ...
	W0318 05:15:29.565482   23306 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:15:29.565512   23306 start.go:728] Will try again in 5 seconds ...
	I0318 05:15:32.597423   23306 cache.go:157] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0318 05:15:32.597473   23306 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 5.337282291s
	I0318 05:15:32.597499   23306 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0318 05:15:32.820750   23306 cache.go:157] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0318 05:15:32.820809   23306 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 5.560569291s
	I0318 05:15:32.820840   23306 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0318 05:15:33.184388   23306 cache.go:157] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0318 05:15:33.184443   23306 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 5.924269958s
	I0318 05:15:33.184471   23306 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0318 05:15:33.760648   23306 cache.go:157] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0318 05:15:33.760701   23306 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 6.500690291s
	I0318 05:15:33.760728   23306 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0318 05:15:34.041361   23306 cache.go:157] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0318 05:15:34.041415   23306 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 6.781403833s
	I0318 05:15:34.041452   23306 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0318 05:15:34.565563   23306 start.go:360] acquireMachinesLock for no-preload-051000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:15:34.565956   23306 start.go:364] duration metric: took 314.208µs to acquireMachinesLock for "no-preload-051000"
	I0318 05:15:34.566085   23306 start.go:93] Provisioning new machine with config: &{Name:no-preload-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:15:34.566446   23306 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:15:34.574118   23306 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 05:15:34.622673   23306 start.go:159] libmachine.API.Create for "no-preload-051000" (driver="qemu2")
	I0318 05:15:34.622724   23306 client.go:168] LocalClient.Create starting
	I0318 05:15:34.622864   23306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:15:34.622933   23306 main.go:141] libmachine: Decoding PEM data...
	I0318 05:15:34.622953   23306 main.go:141] libmachine: Parsing certificate...
	I0318 05:15:34.623007   23306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:15:34.623048   23306 main.go:141] libmachine: Decoding PEM data...
	I0318 05:15:34.623062   23306 main.go:141] libmachine: Parsing certificate...
	I0318 05:15:34.623566   23306 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:15:34.793823   23306 main.go:141] libmachine: Creating SSH key...
	I0318 05:15:34.839415   23306 main.go:141] libmachine: Creating Disk image...
	I0318 05:15:34.839420   23306 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:15:34.839597   23306 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/disk.qcow2
	I0318 05:15:34.852294   23306 main.go:141] libmachine: STDOUT: 
	I0318 05:15:34.852316   23306 main.go:141] libmachine: STDERR: 
	I0318 05:15:34.852381   23306 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/disk.qcow2 +20000M
	I0318 05:15:34.863049   23306 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:15:34.863065   23306 main.go:141] libmachine: STDERR: 
	I0318 05:15:34.863078   23306 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/disk.qcow2
	I0318 05:15:34.863083   23306 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:15:34.863133   23306 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:c1:8c:d9:f4:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/disk.qcow2
	I0318 05:15:34.864912   23306 main.go:141] libmachine: STDOUT: 
	I0318 05:15:34.864930   23306 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:15:34.864942   23306 client.go:171] duration metric: took 242.218625ms to LocalClient.Create
	I0318 05:15:35.715769   23306 cache.go:157] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I0318 05:15:35.715831   23306 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 8.455759167s
	I0318 05:15:35.715856   23306 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0318 05:15:35.715927   23306 cache.go:87] Successfully saved all images to host disk.
	I0318 05:15:36.866126   23306 start.go:128] duration metric: took 2.299728167s to createHost
	I0318 05:15:36.866192   23306 start.go:83] releasing machines lock for "no-preload-051000", held for 2.300288458s
	W0318 05:15:36.866495   23306 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-051000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:15:36.879230   23306 out.go:177] 
	W0318 05:15:36.883278   23306 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:15:36.883315   23306 out.go:239] * 
	W0318 05:15:36.885913   23306 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:15:36.895181   23306 out.go:177] 

** /stderr **
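The refusal above is independent of minikube itself: socket_vmnet_client's role is to connect to the given Unix socket and exec the wrapped command with the vmnet file descriptor attached, so the same two-argument invocation with a no-op command should reproduce the error in isolation. A minimal repro sketch, assuming socket_vmnet_client accepts an arbitrary command after the socket path (both paths are copied from the failing command in the log):

	# Repro sketch outside the test harness; "true" is a stand-in for
	# qemu-system-aarch64 and is an assumption about the client's CLI.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# Expected on this host:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused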
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-051000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000: exit status 7 (68.687625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.84s)
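Both create attempts above die before QEMU ever starts: the connect to /var/run/socket_vmnet is refused, which means the socket_vmnet daemon is not listening on the CI host. A check-and-restart sketch, assuming the daemon was installed via Homebrew as minikube's qemu2 driver docs suggest (the service name is an assumption; the socket path comes from the log):

	ls -l /var/run/socket_vmnet               # the Unix socket should exist
	sudo pgrep -fl socket_vmnet               # the daemon should be running as root
	# If either check fails, restart the service and re-run the test:
	sudo brew services restart socket_vmnet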

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-051000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-051000 create -f testdata/busybox.yaml: exit status 1 (29.068583ms)

** stderr ** 
	error: context "no-preload-051000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-051000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000: exit status 7 (33.1195ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-051000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000: exit status 7 (32.12175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
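The recurring "exit status 7 (may be ok)" from the status probes is consistent with a host that was never created: per minikube's own help text, the status exit code encodes VM, cluster, and Kubernetes state as bits from right to left, so 7 = 1 (minikube not OK) + 2 (cluster not OK) + 4 (Kubernetes not OK). A small decoding sketch under that reading:

	# Decode a minikube status exit code (bit meanings per minikube's
	# status help text; treat them as an assumption for other versions).
	code=7
	(( code & 1 )) && echo "minikube host not OK"
	(( code & 2 )) && echo "cluster not OK"
	(( code & 4 )) && echo "kubernetes not OK"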

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-051000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-051000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-051000 describe deploy/metrics-server -n kube-system: exit status 1 (26.446417ms)

** stderr ** 
	error: context "no-preload-051000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-051000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000: exit status 7 (33.117542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-051000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-051000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.198998041s)

-- stdout --
	* [no-preload-051000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-051000" primary control-plane node in "no-preload-051000" cluster
	* Restarting existing qemu2 VM for "no-preload-051000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-051000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:15:41.180140   23387 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:15:41.180278   23387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:15:41.180281   23387 out.go:304] Setting ErrFile to fd 2...
	I0318 05:15:41.180283   23387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:15:41.180416   23387 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:15:41.181403   23387 out.go:298] Setting JSON to false
	I0318 05:15:41.197382   23387 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11714,"bootTime":1710752427,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:15:41.197444   23387 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:15:41.202845   23387 out.go:177] * [no-preload-051000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:15:41.209688   23387 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:15:41.213772   23387 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:15:41.209744   23387 notify.go:220] Checking for updates...
	I0318 05:15:41.220706   23387 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:15:41.223788   23387 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:15:41.226604   23387 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:15:41.229731   23387 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:15:41.233126   23387 config.go:182] Loaded profile config "no-preload-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0318 05:15:41.233407   23387 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:15:41.236602   23387 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 05:15:41.243722   23387 start.go:297] selected driver: qemu2
	I0318 05:15:41.243728   23387 start.go:901] validating driver "qemu2" against &{Name:no-preload-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:15:41.243796   23387 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:15:41.246159   23387 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 05:15:41.246209   23387 cni.go:84] Creating CNI manager for ""
	I0318 05:15:41.246216   23387 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:15:41.246245   23387 start.go:340] cluster config:
	{Name:no-preload-051000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-051000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:15:41.250648   23387 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:41.266781   23387 out.go:177] * Starting "no-preload-051000" primary control-plane node in "no-preload-051000" cluster
	I0318 05:15:41.270736   23387 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 05:15:41.270836   23387 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/no-preload-051000/config.json ...
	I0318 05:15:41.270864   23387 cache.go:107] acquiring lock: {Name:mk39bd09ca568613e74095f6d80a9acef2e49dbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:41.270894   23387 cache.go:107] acquiring lock: {Name:mk93393bbaeee146fddf4371287dc32fefbcee18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:41.270914   23387 cache.go:107] acquiring lock: {Name:mk66a69b2e880d199d9ba166413f715ca5c6886a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:41.270948   23387 cache.go:107] acquiring lock: {Name:mk2e91e257b336427942dc5dc1d32af85666ff84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:41.270956   23387 cache.go:115] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0318 05:15:41.270965   23387 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 102.542µs
	I0318 05:15:41.270972   23387 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0318 05:15:41.270982   23387 cache.go:107] acquiring lock: {Name:mk61ac10eb31d93997b77bda9629f707a0547d56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:41.270992   23387 cache.go:115] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0318 05:15:41.270998   23387 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 54.334µs
	I0318 05:15:41.271003   23387 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0318 05:15:41.271010   23387 cache.go:107] acquiring lock: {Name:mk7d25eb6281aba53b9a4923e5d2b808a4981640 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:41.271019   23387 cache.go:115] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0318 05:15:41.271031   23387 cache.go:115] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0318 05:15:41.271029   23387 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 146.25µs
	I0318 05:15:41.271035   23387 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 57.708µs
	I0318 05:15:41.271037   23387 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0318 05:15:41.271040   23387 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0318 05:15:41.271054   23387 cache.go:115] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0318 05:15:41.271057   23387 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 48.542µs
	I0318 05:15:41.271062   23387 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0318 05:15:41.271096   23387 cache.go:107] acquiring lock: {Name:mk29756f4e78345d2224f3b4522b649e0762335f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:41.271101   23387 cache.go:107] acquiring lock: {Name:mk971db6be6f1135f1cfb55a38ead218f7f935cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:41.271147   23387 cache.go:115] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0318 05:15:41.271156   23387 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 270.5µs
	I0318 05:15:41.271160   23387 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0318 05:15:41.271165   23387 cache.go:115] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I0318 05:15:41.271172   23387 cache.go:115] /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0318 05:15:41.271169   23387 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 130.083µs
	I0318 05:15:41.271177   23387 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0318 05:15:41.271176   23387 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 116.417µs
	I0318 05:15:41.271184   23387 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0318 05:15:41.271189   23387 cache.go:87] Successfully saved all images to host disk.
	I0318 05:15:41.271236   23387 start.go:360] acquireMachinesLock for no-preload-051000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:15:41.271281   23387 start.go:364] duration metric: took 31.542µs to acquireMachinesLock for "no-preload-051000"
	I0318 05:15:41.271292   23387 start.go:96] Skipping create...Using existing machine configuration
	I0318 05:15:41.271296   23387 fix.go:54] fixHost starting: 
	I0318 05:15:41.271428   23387 fix.go:112] recreateIfNeeded on no-preload-051000: state=Stopped err=<nil>
	W0318 05:15:41.271443   23387 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 05:15:41.279729   23387 out.go:177] * Restarting existing qemu2 VM for "no-preload-051000" ...
	I0318 05:15:41.283697   23387 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:c1:8c:d9:f4:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/disk.qcow2
	I0318 05:15:41.286002   23387 main.go:141] libmachine: STDOUT: 
	I0318 05:15:41.286022   23387 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:15:41.286053   23387 fix.go:56] duration metric: took 14.755833ms for fixHost
	I0318 05:15:41.286058   23387 start.go:83] releasing machines lock for "no-preload-051000", held for 14.772125ms
	W0318 05:15:41.286066   23387 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:15:41.286095   23387 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:15:41.286101   23387 start.go:728] Will try again in 5 seconds ...
	I0318 05:15:46.288157   23387 start.go:360] acquireMachinesLock for no-preload-051000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:15:46.288532   23387 start.go:364] duration metric: took 292.833µs to acquireMachinesLock for "no-preload-051000"
	I0318 05:15:46.288688   23387 start.go:96] Skipping create...Using existing machine configuration
	I0318 05:15:46.288712   23387 fix.go:54] fixHost starting: 
	I0318 05:15:46.289433   23387 fix.go:112] recreateIfNeeded on no-preload-051000: state=Stopped err=<nil>
	W0318 05:15:46.289461   23387 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 05:15:46.294022   23387 out.go:177] * Restarting existing qemu2 VM for "no-preload-051000" ...
	I0318 05:15:46.301127   23387 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:c1:8c:d9:f4:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/no-preload-051000/disk.qcow2
	I0318 05:15:46.310836   23387 main.go:141] libmachine: STDOUT: 
	I0318 05:15:46.310919   23387 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:15:46.311000   23387 fix.go:56] duration metric: took 22.289292ms for fixHost
	I0318 05:15:46.311022   23387 start.go:83] releasing machines lock for "no-preload-051000", held for 22.467625ms
	W0318 05:15:46.311205   23387 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-051000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:15:46.318937   23387 out.go:177] 
	W0318 05:15:46.323045   23387 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:15:46.323121   23387 out.go:239] * 
	W0318 05:15:46.325477   23387 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:15:46.338963   23387 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-051000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000: exit status 7 (70.560375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.27s)
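
Every failure in this group traces back to the same driver error: Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. nothing was listening on the socket_vmnet unix socket on the build host. As a triage aid, a minimal standalone Go probe for that condition (socket path taken from the log above; this is a diagnostic sketch, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config logged above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
	if err != nil {
		// With the daemon down, this reports the same "connection refused" the qemu2 driver hits.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening")
}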

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-051000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000: exit status 7 (33.918792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
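
The remaining subtests in this group fail before doing any real work: because the second start never brought the VM up, minikube never rewrote the kubeconfig, so kubectl's context "no-preload-051000" does not exist. A small client-go sketch to confirm which contexts the run's kubeconfig actually contains (path taken from the KUBECONFIG printed in this run; hypothetical on any other machine):

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG as printed in this run's start output.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/18427-19517/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	for name := range cfg.Contexts {
		fmt.Println(name) // "no-preload-051000" is absent, hence the errors above
	}
}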

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-051000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-051000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-051000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.112291ms)

** stderr ** 
	error: context "no-preload-051000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-051000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000: exit status 7 (31.708667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-051000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000: exit status 7 (32.237708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
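
The (-want +got) block above is a go-cmp style diff: every expected image carries a - and nothing carries a +, because image list against the stopped host returned no images at all. A self-contained sketch that produces the same shape of output, assuming the github.com/google/go-cmp module (which matches the diff format the test prints):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
		"registry.k8s.io/pause:3.9",
	}
	var got []string // empty: the VM never started, so no images were listed
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("images missing (-want +got):\n%s", diff)
	}
}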

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-051000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-051000 --alsologtostderr -v=1: exit status 83 (43.151208ms)

-- stdout --
	* The control-plane node no-preload-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-051000"

-- /stdout --
** stderr ** 
	I0318 05:15:46.618789   23409 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:15:46.618942   23409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:15:46.618945   23409 out.go:304] Setting ErrFile to fd 2...
	I0318 05:15:46.618948   23409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:15:46.619077   23409 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:15:46.619304   23409 out.go:298] Setting JSON to false
	I0318 05:15:46.619313   23409 mustload.go:65] Loading cluster: no-preload-051000
	I0318 05:15:46.619506   23409 config.go:182] Loaded profile config "no-preload-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0318 05:15:46.623667   23409 out.go:177] * The control-plane node no-preload-051000 host is not running: state=Stopped
	I0318 05:15:46.626548   23409 out.go:177]   To start a cluster, run: "minikube start -p no-preload-051000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-051000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000: exit status 7 (32.52525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-051000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000: exit status 7 (32.714709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
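
Each post-mortem probes the host with status --format={{.Host}}; the --format argument is a Go text/template rendered against minikube's status data, which is why the command can print just Stopped while exiting 7 (a code the harness itself flags as "may be ok"). A stripped-down sketch of that rendering, using a stand-in struct rather than minikube's real status type:

package main

import (
	"os"
	"text/template"
)

// Stand-in for the status value minikube renders its --format template against.
type Status struct {
	Host string
}

func main() {
	// The exact template string passed on the command lines above.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped"}) // prints: Stopped
}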

TestStartStop/group/embed-certs/serial/FirstStart (9.97s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-613000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-613000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (9.893141375s)

-- stdout --
	* [embed-certs-613000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-613000" primary control-plane node in "embed-certs-613000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-613000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:15:47.096630   23432 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:15:47.096757   23432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:15:47.096760   23432 out.go:304] Setting ErrFile to fd 2...
	I0318 05:15:47.096762   23432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:15:47.096878   23432 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:15:47.097896   23432 out.go:298] Setting JSON to false
	I0318 05:15:47.113862   23432 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11720,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:15:47.113927   23432 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:15:47.119498   23432 out.go:177] * [embed-certs-613000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:15:47.126495   23432 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:15:47.126550   23432 notify.go:220] Checking for updates...
	I0318 05:15:47.129557   23432 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:15:47.133445   23432 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:15:47.140461   23432 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:15:47.143498   23432 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:15:47.146405   23432 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:15:47.149745   23432 config.go:182] Loaded profile config "cert-expiration-110000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:15:47.149812   23432 config.go:182] Loaded profile config "multinode-730000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:15:47.149866   23432 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:15:47.153495   23432 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 05:15:47.160491   23432 start.go:297] selected driver: qemu2
	I0318 05:15:47.160499   23432 start.go:901] validating driver "qemu2" against <nil>
	I0318 05:15:47.160505   23432 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:15:47.162788   23432 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 05:15:47.167472   23432 out.go:177] * Automatically selected the socket_vmnet network
	I0318 05:15:47.170590   23432 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 05:15:47.170631   23432 cni.go:84] Creating CNI manager for ""
	I0318 05:15:47.170641   23432 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:15:47.170648   23432 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 05:15:47.170700   23432 start.go:340] cluster config:
	{Name:embed-certs-613000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:15:47.175434   23432 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:15:47.182380   23432 out.go:177] * Starting "embed-certs-613000" primary control-plane node in "embed-certs-613000" cluster
	I0318 05:15:47.186437   23432 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 05:15:47.186456   23432 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 05:15:47.186470   23432 cache.go:56] Caching tarball of preloaded images
	I0318 05:15:47.186538   23432 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:15:47.186544   23432 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 05:15:47.186611   23432 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/embed-certs-613000/config.json ...
	I0318 05:15:47.186622   23432 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/embed-certs-613000/config.json: {Name:mk1d20dada4707a7ea8d9d6bb97d4b0a1ba15cc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:15:47.186914   23432 start.go:360] acquireMachinesLock for embed-certs-613000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:15:47.186950   23432 start.go:364] duration metric: took 27.041µs to acquireMachinesLock for "embed-certs-613000"
	I0318 05:15:47.186963   23432 start.go:93] Provisioning new machine with config: &{Name:embed-certs-613000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:15:47.187002   23432 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:15:47.190485   23432 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 05:15:47.207713   23432 start.go:159] libmachine.API.Create for "embed-certs-613000" (driver="qemu2")
	I0318 05:15:47.207737   23432 client.go:168] LocalClient.Create starting
	I0318 05:15:47.207795   23432 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:15:47.207825   23432 main.go:141] libmachine: Decoding PEM data...
	I0318 05:15:47.207836   23432 main.go:141] libmachine: Parsing certificate...
	I0318 05:15:47.207880   23432 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:15:47.207905   23432 main.go:141] libmachine: Decoding PEM data...
	I0318 05:15:47.207910   23432 main.go:141] libmachine: Parsing certificate...
	I0318 05:15:47.208310   23432 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:15:47.348705   23432 main.go:141] libmachine: Creating SSH key...
	I0318 05:15:47.509548   23432 main.go:141] libmachine: Creating Disk image...
	I0318 05:15:47.509556   23432 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:15:47.509744   23432 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/disk.qcow2
	I0318 05:15:47.522201   23432 main.go:141] libmachine: STDOUT: 
	I0318 05:15:47.522222   23432 main.go:141] libmachine: STDERR: 
	I0318 05:15:47.522268   23432 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/disk.qcow2 +20000M
	I0318 05:15:47.532967   23432 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:15:47.532985   23432 main.go:141] libmachine: STDERR: 
	I0318 05:15:47.533001   23432 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/disk.qcow2
	I0318 05:15:47.533005   23432 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:15:47.533038   23432 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:74:ff:47:b7:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/disk.qcow2
	I0318 05:15:47.534758   23432 main.go:141] libmachine: STDOUT: 
	I0318 05:15:47.534774   23432 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:15:47.534791   23432 client.go:171] duration metric: took 327.061125ms to LocalClient.Create
	I0318 05:15:49.535840   23432 start.go:128] duration metric: took 2.348894208s to createHost
	I0318 05:15:49.535901   23432 start.go:83] releasing machines lock for "embed-certs-613000", held for 2.349019875s
	W0318 05:15:49.536019   23432 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:15:49.553077   23432 out.go:177] * Deleting "embed-certs-613000" in qemu2 ...
	W0318 05:15:49.580678   23432 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:15:49.580710   23432 start.go:728] Will try again in 5 seconds ...
	I0318 05:15:54.582780   23432 start.go:360] acquireMachinesLock for embed-certs-613000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:15:54.583196   23432 start.go:364] duration metric: took 299.083µs to acquireMachinesLock for "embed-certs-613000"
	I0318 05:15:54.583361   23432 start.go:93] Provisioning new machine with config: &{Name:embed-certs-613000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:15:54.583639   23432 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:15:54.593205   23432 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 05:15:54.641722   23432 start.go:159] libmachine.API.Create for "embed-certs-613000" (driver="qemu2")
	I0318 05:15:54.641769   23432 client.go:168] LocalClient.Create starting
	I0318 05:15:54.641862   23432 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:15:54.641919   23432 main.go:141] libmachine: Decoding PEM data...
	I0318 05:15:54.641936   23432 main.go:141] libmachine: Parsing certificate...
	I0318 05:15:54.642000   23432 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:15:54.642042   23432 main.go:141] libmachine: Decoding PEM data...
	I0318 05:15:54.642053   23432 main.go:141] libmachine: Parsing certificate...
	I0318 05:15:54.642573   23432 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:15:54.793998   23432 main.go:141] libmachine: Creating SSH key...
	I0318 05:15:54.887320   23432 main.go:141] libmachine: Creating Disk image...
	I0318 05:15:54.887332   23432 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:15:54.887518   23432 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/disk.qcow2
	I0318 05:15:54.900060   23432 main.go:141] libmachine: STDOUT: 
	I0318 05:15:54.900082   23432 main.go:141] libmachine: STDERR: 
	I0318 05:15:54.900144   23432 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/disk.qcow2 +20000M
	I0318 05:15:54.910970   23432 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:15:54.910993   23432 main.go:141] libmachine: STDERR: 
	I0318 05:15:54.911008   23432 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/disk.qcow2
	I0318 05:15:54.911012   23432 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:15:54.911056   23432 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:d4:c0:31:28:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/disk.qcow2
	I0318 05:15:54.912766   23432 main.go:141] libmachine: STDOUT: 
	I0318 05:15:54.912782   23432 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:15:54.912794   23432 client.go:171] duration metric: took 271.027417ms to LocalClient.Create
	I0318 05:15:56.915066   23432 start.go:128] duration metric: took 2.331413541s to createHost
	I0318 05:15:56.915205   23432 start.go:83] releasing machines lock for "embed-certs-613000", held for 2.332060958s
	W0318 05:15:56.915706   23432 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-613000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-613000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:15:56.925334   23432 out.go:177] 
	W0318 05:15:56.932435   23432 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:15:56.932466   23432 out.go:239] * 
	* 
	W0318 05:15:56.934978   23432 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:15:56.944276   23432 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-613000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (69.924667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.97s)
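
The stderr above shows the disk-provisioning half of createHost succeeding: libmachine shells out to qemu-img to convert the raw boot image to qcow2 and grow it by 20000 MB, and only the subsequent socket_vmnet-fronted QEMU launch fails. A rough Go sketch of those two shell-outs as they appear in the log (illustrative file names, not the Jenkins paths):

package main

import (
	"log"
	"os/exec"
)

func main() {
	raw, qcow := "disk.qcow2.raw", "disk.qcow2"
	// qemu-img convert -f raw -O qcow2 <raw> <qcow2>, as logged by libmachine.
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow).CombinedOutput(); err != nil {
		log.Fatalf("convert failed: %v\n%s", err, out)
	}
	// qemu-img resize <qcow2> +20000M, matching the "+20000M" argument above.
	if out, err := exec.Command("qemu-img", "resize", qcow, "+20000M").CombinedOutput(); err != nil {
		log.Fatalf("resize failed: %v\n%s", err, out)
	}
}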

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-613000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-613000 create -f testdata/busybox.yaml: exit status 1 (28.326291ms)

** stderr ** 
	error: context "embed-certs-613000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-613000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (32.293417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-613000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (31.928ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-613000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-613000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-613000 describe deploy/metrics-server -n kube-system: exit status 1 (27.017541ms)

** stderr ** 
	error: context "embed-certs-613000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-613000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (32.624959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
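
The expected string " fake.domain/registry.k8s.io/echoserver:1.4" appears to be the --registries override joined onto the --images override with a slash, which is how a custom registry prefix composes with an image reference. In sketch form:

package main

import "fmt"

func main() {
	registry := "fake.domain"                 // from --registries=MetricsServer=fake.domain
	image := "registry.k8s.io/echoserver:1.4" // from --images=MetricsServer=registry.k8s.io/echoserver:1.4
	fmt.Println(registry + "/" + image)       // fake.domain/registry.k8s.io/echoserver:1.4
}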

TestStartStop/group/embed-certs/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-613000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-613000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.186108291s)

-- stdout --
	* [embed-certs-613000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-613000" primary control-plane node in "embed-certs-613000" cluster
	* Restarting existing qemu2 VM for "embed-certs-613000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-613000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:16:01.136330   23483 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:16:01.136450   23483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:16:01.136453   23483 out.go:304] Setting ErrFile to fd 2...
	I0318 05:16:01.136455   23483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:16:01.136615   23483 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:16:01.137614   23483 out.go:298] Setting JSON to false
	I0318 05:16:01.153844   23483 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11734,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:16:01.153906   23483 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:16:01.157564   23483 out.go:177] * [embed-certs-613000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:16:01.164572   23483 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:16:01.164611   23483 notify.go:220] Checking for updates...
	I0318 05:16:01.168471   23483 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:16:01.172524   23483 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:16:01.175625   23483 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:16:01.178527   23483 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:16:01.181569   23483 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:16:01.184870   23483 config.go:182] Loaded profile config "embed-certs-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:16:01.185127   23483 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:16:01.189533   23483 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 05:16:01.196619   23483 start.go:297] selected driver: qemu2
	I0318 05:16:01.196626   23483 start.go:901] validating driver "qemu2" against &{Name:embed-certs-613000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:16:01.196690   23483 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:16:01.199044   23483 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 05:16:01.199092   23483 cni.go:84] Creating CNI manager for ""
	I0318 05:16:01.199099   23483 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:16:01.199126   23483 start.go:340] cluster config:
	{Name:embed-certs-613000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:16:01.203525   23483 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:16:01.210603   23483 out.go:177] * Starting "embed-certs-613000" primary control-plane node in "embed-certs-613000" cluster
	I0318 05:16:01.214392   23483 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 05:16:01.214413   23483 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 05:16:01.214425   23483 cache.go:56] Caching tarball of preloaded images
	I0318 05:16:01.214487   23483 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:16:01.214493   23483 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 05:16:01.214571   23483 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/embed-certs-613000/config.json ...
	I0318 05:16:01.215063   23483 start.go:360] acquireMachinesLock for embed-certs-613000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:16:01.215091   23483 start.go:364] duration metric: took 21.958µs to acquireMachinesLock for "embed-certs-613000"
	I0318 05:16:01.215104   23483 start.go:96] Skipping create...Using existing machine configuration
	I0318 05:16:01.215110   23483 fix.go:54] fixHost starting: 
	I0318 05:16:01.215228   23483 fix.go:112] recreateIfNeeded on embed-certs-613000: state=Stopped err=<nil>
	W0318 05:16:01.215238   23483 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 05:16:01.223584   23483 out.go:177] * Restarting existing qemu2 VM for "embed-certs-613000" ...
	I0318 05:16:01.227531   23483 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:d4:c0:31:28:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/disk.qcow2
	I0318 05:16:01.229626   23483 main.go:141] libmachine: STDOUT: 
	I0318 05:16:01.229644   23483 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:16:01.229670   23483 fix.go:56] duration metric: took 14.561291ms for fixHost
	I0318 05:16:01.229674   23483 start.go:83] releasing machines lock for "embed-certs-613000", held for 14.579875ms
	W0318 05:16:01.229683   23483 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:16:01.229712   23483 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:16:01.229717   23483 start.go:728] Will try again in 5 seconds ...
	I0318 05:16:06.230758   23483 start.go:360] acquireMachinesLock for embed-certs-613000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:16:06.231105   23483 start.go:364] duration metric: took 255.292µs to acquireMachinesLock for "embed-certs-613000"
	I0318 05:16:06.231219   23483 start.go:96] Skipping create...Using existing machine configuration
	I0318 05:16:06.231242   23483 fix.go:54] fixHost starting: 
	I0318 05:16:06.231944   23483 fix.go:112] recreateIfNeeded on embed-certs-613000: state=Stopped err=<nil>
	W0318 05:16:06.231969   23483 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 05:16:06.241342   23483 out.go:177] * Restarting existing qemu2 VM for "embed-certs-613000" ...
	I0318 05:16:06.244594   23483 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:d4:c0:31:28:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/embed-certs-613000/disk.qcow2
	I0318 05:16:06.254707   23483 main.go:141] libmachine: STDOUT: 
	I0318 05:16:06.254771   23483 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:16:06.254849   23483 fix.go:56] duration metric: took 23.609875ms for fixHost
	I0318 05:16:06.254867   23483 start.go:83] releasing machines lock for "embed-certs-613000", held for 23.742083ms
	W0318 05:16:06.255103   23483 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-613000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-613000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:16:06.262464   23483 out.go:177] 
	W0318 05:16:06.265401   23483 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:16:06.265456   23483 out.go:239] * 
	* 
	W0318 05:16:06.268037   23483 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:16:06.276320   23483 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-613000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (66.77325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.25s)
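Every embed-certs failure above and below traces to a single driver error: the qemu2 driver launches qemu-system-aarch64 through socket_vmnet_client, which gets "Connection refused" from /var/run/socket_vmnet, so the VM never starts. A first check on the build host might look like the following sketch (the launchd label is an assumption taken from the upstream lima-vm/socket_vmnet install docs, not from this report):

	# Is the daemon socket present, and is the daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If not, restart it; the label below is assumed, not from this log.
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet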

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-613000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (33.572542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
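This check (and the AddonExistsAfterStop check below) fails before ever touching a cluster: the second start exited at provisioning, so the kubeconfig used by the run never gained an embed-certs-613000 context. That can be confirmed directly against the run's kubeconfig, which the start output reports; a minimal sketch:

	# List contexts known to this run's kubeconfig;
	# embed-certs-613000 is expected to be absent here.
	KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig \
	  kubectl config get-contexts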

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-613000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-613000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-613000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.449583ms)

** stderr ** 
	error: context "embed-certs-613000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-613000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (31.498458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-613000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (31.2655ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
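The listing above is a (-want +got) diff in go-cmp's convention: every "-" line is an image expected for v1.28.4 that `image list` failed to report, consistent with a VM that never booted rather than a partially loaded cache. Against a healthy profile, the same data can be reproduced with the command the test runs:

	# Reproduce the test's query; on a running profile this prints the
	# bundled control-plane images as JSON.
	out/minikube-darwin-arm64 -p embed-certs-613000 image list --format=json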

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-613000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-613000 --alsologtostderr -v=1: exit status 83 (42.303209ms)

-- stdout --
	* The control-plane node embed-certs-613000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-613000"

-- /stdout --
** stderr ** 
	I0318 05:16:06.554401   23512 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:16:06.554554   23512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:16:06.554557   23512 out.go:304] Setting ErrFile to fd 2...
	I0318 05:16:06.554559   23512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:16:06.554695   23512 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:16:06.554901   23512 out.go:298] Setting JSON to false
	I0318 05:16:06.554909   23512 mustload.go:65] Loading cluster: embed-certs-613000
	I0318 05:16:06.555096   23512 config.go:182] Loaded profile config "embed-certs-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:16:06.559369   23512 out.go:177] * The control-plane node embed-certs-613000 host is not running: state=Stopped
	I0318 05:16:06.563165   23512 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-613000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-613000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (30.770667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-613000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (30.865083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
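Note that pause exits 83 here, a different reserved minikube exit code from the 80 returned by the failed starts; the accompanying guidance ("To start a cluster, run ...") reflects the Stopped host state rather than a new error. To survey the state of every leftover profile in one call instead of per-profile status checks, a hedged sketch (flag names per the minikube CLI as of v1.32):

	# One-shot view of all profiles and their last-known status.
	out/minikube-darwin-arm64 profile list --output json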

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-092000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-092000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (9.854018667s)

-- stdout --
	* [default-k8s-diff-port-092000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-092000" primary control-plane node in "default-k8s-diff-port-092000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-092000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:16:07.257926   23547 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:16:07.258087   23547 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:16:07.258091   23547 out.go:304] Setting ErrFile to fd 2...
	I0318 05:16:07.258093   23547 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:16:07.258240   23547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:16:07.259328   23547 out.go:298] Setting JSON to false
	I0318 05:16:07.275429   23547 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11740,"bootTime":1710752427,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:16:07.275499   23547 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:16:07.280871   23547 out.go:177] * [default-k8s-diff-port-092000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:16:07.286892   23547 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:16:07.290831   23547 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:16:07.286942   23547 notify.go:220] Checking for updates...
	I0318 05:16:07.293784   23547 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:16:07.296835   23547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:16:07.299884   23547 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:16:07.301293   23547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:16:07.305309   23547 config.go:182] Loaded profile config "cert-expiration-110000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:16:07.305378   23547 config.go:182] Loaded profile config "multinode-730000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:16:07.305423   23547 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:16:07.309843   23547 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 05:16:07.314816   23547 start.go:297] selected driver: qemu2
	I0318 05:16:07.314822   23547 start.go:901] validating driver "qemu2" against <nil>
	I0318 05:16:07.314828   23547 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:16:07.317078   23547 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 05:16:07.319834   23547 out.go:177] * Automatically selected the socket_vmnet network
	I0318 05:16:07.322969   23547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 05:16:07.323020   23547 cni.go:84] Creating CNI manager for ""
	I0318 05:16:07.323029   23547 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:16:07.323033   23547 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 05:16:07.323061   23547 start.go:340] cluster config:
	{Name:default-k8s-diff-port-092000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-092000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:16:07.327572   23547 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:16:07.334839   23547 out.go:177] * Starting "default-k8s-diff-port-092000" primary control-plane node in "default-k8s-diff-port-092000" cluster
	I0318 05:16:07.338724   23547 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 05:16:07.338737   23547 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 05:16:07.338744   23547 cache.go:56] Caching tarball of preloaded images
	I0318 05:16:07.338790   23547 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:16:07.338795   23547 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 05:16:07.338861   23547 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/default-k8s-diff-port-092000/config.json ...
	I0318 05:16:07.338871   23547 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/default-k8s-diff-port-092000/config.json: {Name:mk27d7065bdc9cb20f6ce993a910b6115674ee05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:16:07.339083   23547 start.go:360] acquireMachinesLock for default-k8s-diff-port-092000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:16:07.339114   23547 start.go:364] duration metric: took 25.291µs to acquireMachinesLock for "default-k8s-diff-port-092000"
	I0318 05:16:07.339129   23547 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-092000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-092000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:16:07.339157   23547 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:16:07.347847   23547 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 05:16:07.365125   23547 start.go:159] libmachine.API.Create for "default-k8s-diff-port-092000" (driver="qemu2")
	I0318 05:16:07.365158   23547 client.go:168] LocalClient.Create starting
	I0318 05:16:07.365215   23547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:16:07.365251   23547 main.go:141] libmachine: Decoding PEM data...
	I0318 05:16:07.365262   23547 main.go:141] libmachine: Parsing certificate...
	I0318 05:16:07.365309   23547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:16:07.365331   23547 main.go:141] libmachine: Decoding PEM data...
	I0318 05:16:07.365339   23547 main.go:141] libmachine: Parsing certificate...
	I0318 05:16:07.365678   23547 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:16:07.505915   23547 main.go:141] libmachine: Creating SSH key...
	I0318 05:16:07.653578   23547 main.go:141] libmachine: Creating Disk image...
	I0318 05:16:07.653587   23547 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:16:07.653772   23547 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/disk.qcow2
	I0318 05:16:07.666179   23547 main.go:141] libmachine: STDOUT: 
	I0318 05:16:07.666198   23547 main.go:141] libmachine: STDERR: 
	I0318 05:16:07.666251   23547 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/disk.qcow2 +20000M
	I0318 05:16:07.676902   23547 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:16:07.676915   23547 main.go:141] libmachine: STDERR: 
	I0318 05:16:07.676933   23547 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/disk.qcow2
	I0318 05:16:07.676939   23547 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:16:07.676965   23547 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:cc:ed:fa:4a:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/disk.qcow2
	I0318 05:16:07.678624   23547 main.go:141] libmachine: STDOUT: 
	I0318 05:16:07.678639   23547 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:16:07.678659   23547 client.go:171] duration metric: took 313.505833ms to LocalClient.Create
	I0318 05:16:09.680808   23547 start.go:128] duration metric: took 2.341699333s to createHost
	I0318 05:16:09.680916   23547 start.go:83] releasing machines lock for "default-k8s-diff-port-092000", held for 2.341869708s
	W0318 05:16:09.681040   23547 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:16:09.696131   23547 out.go:177] * Deleting "default-k8s-diff-port-092000" in qemu2 ...
	W0318 05:16:09.722061   23547 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:16:09.722098   23547 start.go:728] Will try again in 5 seconds ...
	I0318 05:16:14.724104   23547 start.go:360] acquireMachinesLock for default-k8s-diff-port-092000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:16:14.724544   23547 start.go:364] duration metric: took 341.25µs to acquireMachinesLock for "default-k8s-diff-port-092000"
	I0318 05:16:14.724697   23547 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-092000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-092000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:16:14.724968   23547 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:16:14.734580   23547 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 05:16:14.783519   23547 start.go:159] libmachine.API.Create for "default-k8s-diff-port-092000" (driver="qemu2")
	I0318 05:16:14.783579   23547 client.go:168] LocalClient.Create starting
	I0318 05:16:14.783692   23547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:16:14.783759   23547 main.go:141] libmachine: Decoding PEM data...
	I0318 05:16:14.783774   23547 main.go:141] libmachine: Parsing certificate...
	I0318 05:16:14.783839   23547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:16:14.783881   23547 main.go:141] libmachine: Decoding PEM data...
	I0318 05:16:14.783895   23547 main.go:141] libmachine: Parsing certificate...
	I0318 05:16:14.785153   23547 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:16:14.942947   23547 main.go:141] libmachine: Creating SSH key...
	I0318 05:16:15.010840   23547 main.go:141] libmachine: Creating Disk image...
	I0318 05:16:15.010849   23547 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:16:15.011020   23547 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/disk.qcow2
	I0318 05:16:15.023179   23547 main.go:141] libmachine: STDOUT: 
	I0318 05:16:15.023198   23547 main.go:141] libmachine: STDERR: 
	I0318 05:16:15.023272   23547 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/disk.qcow2 +20000M
	I0318 05:16:15.034008   23547 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:16:15.034022   23547 main.go:141] libmachine: STDERR: 
	I0318 05:16:15.034045   23547 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/disk.qcow2
	I0318 05:16:15.034049   23547 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:16:15.034081   23547 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:49:5a:1f:99:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/disk.qcow2
	I0318 05:16:15.035795   23547 main.go:141] libmachine: STDOUT: 
	I0318 05:16:15.035809   23547 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:16:15.035836   23547 client.go:171] duration metric: took 252.247541ms to LocalClient.Create
	I0318 05:16:17.037939   23547 start.go:128] duration metric: took 2.313018542s to createHost
	I0318 05:16:17.038094   23547 start.go:83] releasing machines lock for "default-k8s-diff-port-092000", held for 2.313553625s
	W0318 05:16:17.038442   23547 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-092000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-092000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:16:17.051273   23547 out.go:177] 
	W0318 05:16:17.055211   23547 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:16:17.055263   23547 out.go:239] * 
	* 
	W0318 05:16:17.057904   23547 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:16:17.067188   23547 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-092000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000: exit status 7 (67.184709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.92s)
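As in the embed-certs runs, the create path dies at the network wrapper: libmachine invokes /opt/socket_vmnet/bin/socket_vmnet_client with the socket path followed by the command to exec, and the client hands the connected socket to qemu as -netdev socket,id=net0,fd=3. The wrapper can be exercised without qemu to isolate the daemon connection from VM creation; `true` below is only a placeholder child command, not something the suite runs:

	# Connect to the daemon socket, then exec a no-op child on success.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	echo "exit: $?"   # non-zero, with "Connection refused", while socket_vmnet is down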

TestStartStop/group/newest-cni/serial/FirstStart (10.08s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-461000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-461000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (10.01488875s)

-- stdout --
	* [newest-cni-461000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-461000" primary control-plane node in "newest-cni-461000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-461000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:16:10.190851   23564 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:16:10.190991   23564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:16:10.190995   23564 out.go:304] Setting ErrFile to fd 2...
	I0318 05:16:10.190998   23564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:16:10.191138   23564 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:16:10.192246   23564 out.go:298] Setting JSON to false
	I0318 05:16:10.208195   23564 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11743,"bootTime":1710752427,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:16:10.208256   23564 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:16:10.215331   23564 out.go:177] * [newest-cni-461000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:16:10.222310   23564 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:16:10.222347   23564 notify.go:220] Checking for updates...
	I0318 05:16:10.229341   23564 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:16:10.232229   23564 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:16:10.235316   23564 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:16:10.243317   23564 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:16:10.246297   23564 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:16:10.249656   23564 config.go:182] Loaded profile config "default-k8s-diff-port-092000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:16:10.249727   23564 config.go:182] Loaded profile config "multinode-730000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:16:10.249780   23564 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:16:10.254324   23564 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 05:16:10.261257   23564 start.go:297] selected driver: qemu2
	I0318 05:16:10.261264   23564 start.go:901] validating driver "qemu2" against <nil>
	I0318 05:16:10.261270   23564 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:16:10.263639   23564 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0318 05:16:10.263665   23564 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0318 05:16:10.272243   23564 out.go:177] * Automatically selected the socket_vmnet network
	I0318 05:16:10.275359   23564 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0318 05:16:10.275398   23564 cni.go:84] Creating CNI manager for ""
	I0318 05:16:10.275405   23564 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:16:10.275409   23564 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 05:16:10.275441   23564 start.go:340] cluster config:
	{Name:newest-cni-461000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-461000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:16:10.280431   23564 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:16:10.287277   23564 out.go:177] * Starting "newest-cni-461000" primary control-plane node in "newest-cni-461000" cluster
	I0318 05:16:10.291270   23564 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 05:16:10.291288   23564 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0318 05:16:10.291299   23564 cache.go:56] Caching tarball of preloaded images
	I0318 05:16:10.291366   23564 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:16:10.291373   23564 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0318 05:16:10.291446   23564 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/newest-cni-461000/config.json ...
	I0318 05:16:10.291458   23564 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/newest-cni-461000/config.json: {Name:mkbdbaac1f10fc043725f348dca3712c08707685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 05:16:10.291683   23564 start.go:360] acquireMachinesLock for newest-cni-461000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:16:10.291716   23564 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "newest-cni-461000"
	I0318 05:16:10.291733   23564 start.go:93] Provisioning new machine with config: &{Name:newest-cni-461000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-461000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:16:10.291763   23564 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:16:10.296297   23564 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 05:16:10.314554   23564 start.go:159] libmachine.API.Create for "newest-cni-461000" (driver="qemu2")
	I0318 05:16:10.314580   23564 client.go:168] LocalClient.Create starting
	I0318 05:16:10.314640   23564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:16:10.314668   23564 main.go:141] libmachine: Decoding PEM data...
	I0318 05:16:10.314679   23564 main.go:141] libmachine: Parsing certificate...
	I0318 05:16:10.314726   23564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:16:10.314749   23564 main.go:141] libmachine: Decoding PEM data...
	I0318 05:16:10.314756   23564 main.go:141] libmachine: Parsing certificate...
	I0318 05:16:10.315124   23564 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:16:10.454892   23564 main.go:141] libmachine: Creating SSH key...
	I0318 05:16:10.663399   23564 main.go:141] libmachine: Creating Disk image...
	I0318 05:16:10.663407   23564 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:16:10.663601   23564 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/disk.qcow2
	I0318 05:16:10.676275   23564 main.go:141] libmachine: STDOUT: 
	I0318 05:16:10.676297   23564 main.go:141] libmachine: STDERR: 
	I0318 05:16:10.676356   23564 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/disk.qcow2 +20000M
	I0318 05:16:10.687181   23564 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:16:10.687201   23564 main.go:141] libmachine: STDERR: 
	I0318 05:16:10.687214   23564 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/disk.qcow2
	I0318 05:16:10.687218   23564 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:16:10.687251   23564 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:d0:5e:0b:cf:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/disk.qcow2
	I0318 05:16:10.688918   23564 main.go:141] libmachine: STDOUT: 
	I0318 05:16:10.688932   23564 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:16:10.688950   23564 client.go:171] duration metric: took 374.378292ms to LocalClient.Create
	I0318 05:16:12.691134   23564 start.go:128] duration metric: took 2.399423958s to createHost
	I0318 05:16:12.691199   23564 start.go:83] releasing machines lock for "newest-cni-461000", held for 2.399552875s
	W0318 05:16:12.691250   23564 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:16:12.698490   23564 out.go:177] * Deleting "newest-cni-461000" in qemu2 ...
	W0318 05:16:12.729027   23564 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:16:12.729066   23564 start.go:728] Will try again in 5 seconds ...
	I0318 05:16:17.731122   23564 start.go:360] acquireMachinesLock for newest-cni-461000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:16:17.731476   23564 start.go:364] duration metric: took 260.958µs to acquireMachinesLock for "newest-cni-461000"
	I0318 05:16:17.731607   23564 start.go:93] Provisioning new machine with config: &{Name:newest-cni-461000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-461000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 05:16:17.731849   23564 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 05:16:17.741470   23564 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 05:16:17.793148   23564 start.go:159] libmachine.API.Create for "newest-cni-461000" (driver="qemu2")
	I0318 05:16:17.793205   23564 client.go:168] LocalClient.Create starting
	I0318 05:16:17.793310   23564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/ca.pem
	I0318 05:16:17.793363   23564 main.go:141] libmachine: Decoding PEM data...
	I0318 05:16:17.793381   23564 main.go:141] libmachine: Parsing certificate...
	I0318 05:16:17.793448   23564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18427-19517/.minikube/certs/cert.pem
	I0318 05:16:17.793480   23564 main.go:141] libmachine: Decoding PEM data...
	I0318 05:16:17.793497   23564 main.go:141] libmachine: Parsing certificate...
	I0318 05:16:17.794108   23564 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 05:16:17.944555   23564 main.go:141] libmachine: Creating SSH key...
	I0318 05:16:18.098207   23564 main.go:141] libmachine: Creating Disk image...
	I0318 05:16:18.098213   23564 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 05:16:18.098404   23564 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/disk.qcow2.raw /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/disk.qcow2
	I0318 05:16:18.111386   23564 main.go:141] libmachine: STDOUT: 
	I0318 05:16:18.111409   23564 main.go:141] libmachine: STDERR: 
	I0318 05:16:18.111475   23564 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/disk.qcow2 +20000M
	I0318 05:16:18.122244   23564 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 05:16:18.122259   23564 main.go:141] libmachine: STDERR: 
	I0318 05:16:18.122269   23564 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/disk.qcow2
	I0318 05:16:18.122274   23564 main.go:141] libmachine: Starting QEMU VM...
	I0318 05:16:18.122310   23564 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:85:7c:93:18:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/disk.qcow2
	I0318 05:16:18.124013   23564 main.go:141] libmachine: STDOUT: 
	I0318 05:16:18.124030   23564 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:16:18.124043   23564 client.go:171] duration metric: took 330.840375ms to LocalClient.Create
	I0318 05:16:20.126171   23564 start.go:128] duration metric: took 2.394372042s to createHost
	I0318 05:16:20.126297   23564 start.go:83] releasing machines lock for "newest-cni-461000", held for 2.394841s
	W0318 05:16:20.126739   23564 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-461000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-461000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:16:20.142360   23564 out.go:177] 
	W0318 05:16:20.146617   23564 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:16:20.146659   23564 out.go:239] * 
	* 
	W0318 05:16:20.149117   23564 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:16:20.159341   23564 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-461000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-461000 -n newest-cni-461000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-461000 -n newest-cni-461000: exit status 7 (65.919542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-461000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.08s)
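Note: every failure in this group reduces to the same root cause visible in the stderr above: the qemu2 driver could not reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"). A minimal diagnostic sketch for the affected host, assuming socket_vmnet was installed via Homebrew (paths and service management differ for a manual install):

	# Does the daemon's unix socket exist on the host?
	ls -l /var/run/socket_vmnet
	# Is the launchd-managed service loaded, and can it be restarted?
	# (A Homebrew-managed service is an assumption here.)
	sudo launchctl list | grep socket_vmnet
	sudo brew services restart socket_vmnet

The client and socket paths minikube uses (SocketVMnetClientPath, SocketVMnetPath) are recorded in the cluster config dumps above.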

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-092000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-092000 create -f testdata/busybox.yaml: exit status 1 (29.182333ms)

** stderr ** 
	error: context "default-k8s-diff-port-092000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-092000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000: exit status 7 (31.617791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-092000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000: exit status 7 (31.121375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
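Note: this failure is downstream of the start failure rather than an independent bug; the kubeconfig context was never created because the VM never booted. A quick confirmation with stock kubectl, using the KUBECONFIG path logged above:

	# List the contexts in the test run's kubeconfig; the profile should be absent.
	KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig \
	  kubectl config get-contexts
	# Querying the context by name exits non-zero when it does not exist.
	kubectl config get-contexts default-k8s-diff-port-092000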

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-092000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-092000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-092000 describe deploy/metrics-server -n kube-system: exit status 1 (27.308291ms)

** stderr ** 
	error: context "default-k8s-diff-port-092000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-092000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000: exit status 7 (31.309458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)
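Note: the assertion at start_stop_delete_test.go:221 expects the metrics-server image to be rewritten to fake.domain/registry.k8s.io/echoserver:1.4, i.e. the --registries value prefixed onto the --images value. On a healthy cluster that can be verified directly; a sketch using only names taken from this log:

	kubectl --context default-k8s-diff-port-092000 -n kube-system \
	  get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# Expected, per the test: fake.domain/registry.k8s.io/echoserver:1.4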

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-092000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-092000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.187943667s)

-- stdout --
	* [default-k8s-diff-port-092000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-092000" primary control-plane node in "default-k8s-diff-port-092000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-092000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-092000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:16:20.723723   23626 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:16:20.723858   23626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:16:20.723861   23626 out.go:304] Setting ErrFile to fd 2...
	I0318 05:16:20.723867   23626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:16:20.723985   23626 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:16:20.725007   23626 out.go:298] Setting JSON to false
	I0318 05:16:20.741045   23626 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11753,"bootTime":1710752427,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:16:20.741107   23626 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:16:20.746089   23626 out.go:177] * [default-k8s-diff-port-092000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:16:20.752161   23626 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:16:20.755136   23626 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:16:20.752222   23626 notify.go:220] Checking for updates...
	I0318 05:16:20.759119   23626 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:16:20.762122   23626 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:16:20.765045   23626 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:16:20.768086   23626 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:16:20.771406   23626 config.go:182] Loaded profile config "default-k8s-diff-port-092000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:16:20.771664   23626 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:16:20.776027   23626 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 05:16:20.783100   23626 start.go:297] selected driver: qemu2
	I0318 05:16:20.783107   23626 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-092000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-092000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:16:20.783186   23626 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:16:20.785488   23626 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 05:16:20.785528   23626 cni.go:84] Creating CNI manager for ""
	I0318 05:16:20.785536   23626 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:16:20.785567   23626 start.go:340] cluster config:
	{Name:default-k8s-diff-port-092000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-092000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:16:20.789970   23626 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:16:20.797066   23626 out.go:177] * Starting "default-k8s-diff-port-092000" primary control-plane node in "default-k8s-diff-port-092000" cluster
	I0318 05:16:20.801028   23626 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 05:16:20.801044   23626 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 05:16:20.801055   23626 cache.go:56] Caching tarball of preloaded images
	I0318 05:16:20.801110   23626 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:16:20.801124   23626 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 05:16:20.801193   23626 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/default-k8s-diff-port-092000/config.json ...
	I0318 05:16:20.801696   23626 start.go:360] acquireMachinesLock for default-k8s-diff-port-092000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:16:20.801724   23626 start.go:364] duration metric: took 21.375µs to acquireMachinesLock for "default-k8s-diff-port-092000"
	I0318 05:16:20.801733   23626 start.go:96] Skipping create...Using existing machine configuration
	I0318 05:16:20.801740   23626 fix.go:54] fixHost starting: 
	I0318 05:16:20.801869   23626 fix.go:112] recreateIfNeeded on default-k8s-diff-port-092000: state=Stopped err=<nil>
	W0318 05:16:20.801878   23626 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 05:16:20.806078   23626 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-092000" ...
	I0318 05:16:20.814066   23626 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:49:5a:1f:99:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/disk.qcow2
	I0318 05:16:20.816060   23626 main.go:141] libmachine: STDOUT: 
	I0318 05:16:20.816082   23626 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:16:20.816114   23626 fix.go:56] duration metric: took 14.374458ms for fixHost
	I0318 05:16:20.816120   23626 start.go:83] releasing machines lock for "default-k8s-diff-port-092000", held for 14.3925ms
	W0318 05:16:20.816127   23626 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:16:20.816160   23626 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:16:20.816165   23626 start.go:728] Will try again in 5 seconds ...
	I0318 05:16:25.818204   23626 start.go:360] acquireMachinesLock for default-k8s-diff-port-092000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:16:25.818667   23626 start.go:364] duration metric: took 318µs to acquireMachinesLock for "default-k8s-diff-port-092000"
	I0318 05:16:25.818793   23626 start.go:96] Skipping create...Using existing machine configuration
	I0318 05:16:25.818812   23626 fix.go:54] fixHost starting: 
	I0318 05:16:25.819490   23626 fix.go:112] recreateIfNeeded on default-k8s-diff-port-092000: state=Stopped err=<nil>
	W0318 05:16:25.819514   23626 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 05:16:25.828766   23626 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-092000" ...
	I0318 05:16:25.833042   23626 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:49:5a:1f:99:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/default-k8s-diff-port-092000/disk.qcow2
	I0318 05:16:25.843252   23626 main.go:141] libmachine: STDOUT: 
	I0318 05:16:25.843336   23626 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:16:25.843425   23626 fix.go:56] duration metric: took 24.610792ms for fixHost
	I0318 05:16:25.843446   23626 start.go:83] releasing machines lock for "default-k8s-diff-port-092000", held for 24.757333ms
	W0318 05:16:25.843652   23626 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-092000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-092000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:16:25.852824   23626 out.go:177] 
	W0318 05:16:25.856741   23626 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:16:25.856770   23626 out.go:239] * 
	* 
	W0318 05:16:25.859404   23626 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:16:25.867817   23626 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-092000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000: exit status 7 (67.865666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
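Note: the recovery minikube itself suggests ("minikube delete -p default-k8s-diff-port-092000") only helps once socket_vmnet is reachable again. A sketch of the retry, with every flag copied from the failing invocation above:

	out/minikube-darwin-arm64 delete -p default-k8s-diff-port-092000
	out/minikube-darwin-arm64 start -p default-k8s-diff-port-092000 --memory=2200 \
	  --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2 \
	  --kubernetes-version=v1.28.4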

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-461000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-461000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.18831225s)

-- stdout --
	* [newest-cni-461000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-461000" primary control-plane node in "newest-cni-461000" cluster
	* Restarting existing qemu2 VM for "newest-cni-461000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-461000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 05:16:23.811706   23649 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:16:23.811830   23649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:16:23.811834   23649 out.go:304] Setting ErrFile to fd 2...
	I0318 05:16:23.811836   23649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:16:23.811980   23649 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:16:23.812951   23649 out.go:298] Setting JSON to false
	I0318 05:16:23.829159   23649 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":11756,"bootTime":1710752427,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 05:16:23.829226   23649 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 05:16:23.833103   23649 out.go:177] * [newest-cni-461000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 05:16:23.840241   23649 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 05:16:23.843221   23649 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 05:16:23.840302   23649 notify.go:220] Checking for updates...
	I0318 05:16:23.847269   23649 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 05:16:23.850240   23649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 05:16:23.853237   23649 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 05:16:23.856189   23649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 05:16:23.859556   23649 config.go:182] Loaded profile config "newest-cni-461000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0318 05:16:23.859822   23649 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 05:16:23.864181   23649 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 05:16:23.871233   23649 start.go:297] selected driver: qemu2
	I0318 05:16:23.871240   23649 start.go:901] validating driver "qemu2" against &{Name:newest-cni-461000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-461000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:16:23.871320   23649 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 05:16:23.873567   23649 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0318 05:16:23.873617   23649 cni.go:84] Creating CNI manager for ""
	I0318 05:16:23.873626   23649 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 05:16:23.873649   23649 start.go:340] cluster config:
	{Name:newest-cni-461000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-461000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 05:16:23.877964   23649 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 05:16:23.886259   23649 out.go:177] * Starting "newest-cni-461000" primary control-plane node in "newest-cni-461000" cluster
	I0318 05:16:23.890206   23649 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 05:16:23.890221   23649 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0318 05:16:23.890232   23649 cache.go:56] Caching tarball of preloaded images
	I0318 05:16:23.890292   23649 preload.go:173] Found /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 05:16:23.890297   23649 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0318 05:16:23.890348   23649 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/newest-cni-461000/config.json ...
	I0318 05:16:23.890821   23649 start.go:360] acquireMachinesLock for newest-cni-461000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:16:23.890848   23649 start.go:364] duration metric: took 21.167µs to acquireMachinesLock for "newest-cni-461000"
	I0318 05:16:23.890859   23649 start.go:96] Skipping create...Using existing machine configuration
	I0318 05:16:23.890865   23649 fix.go:54] fixHost starting: 
	I0318 05:16:23.890992   23649 fix.go:112] recreateIfNeeded on newest-cni-461000: state=Stopped err=<nil>
	W0318 05:16:23.891001   23649 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 05:16:23.895229   23649 out.go:177] * Restarting existing qemu2 VM for "newest-cni-461000" ...
	I0318 05:16:23.903183   23649 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:85:7c:93:18:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/disk.qcow2
	I0318 05:16:23.905134   23649 main.go:141] libmachine: STDOUT: 
	I0318 05:16:23.905153   23649 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:16:23.905183   23649 fix.go:56] duration metric: took 14.317833ms for fixHost
	I0318 05:16:23.905189   23649 start.go:83] releasing machines lock for "newest-cni-461000", held for 14.335459ms
	W0318 05:16:23.905196   23649 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:16:23.905229   23649 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:16:23.905234   23649 start.go:728] Will try again in 5 seconds ...
	I0318 05:16:28.907378   23649 start.go:360] acquireMachinesLock for newest-cni-461000: {Name:mkdf6f3d9b93e7b8aa9e3d0e7b0c42f1219c0019 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 05:16:28.907782   23649 start.go:364] duration metric: took 323.333µs to acquireMachinesLock for "newest-cni-461000"
	I0318 05:16:28.907913   23649 start.go:96] Skipping create...Using existing machine configuration
	I0318 05:16:28.907935   23649 fix.go:54] fixHost starting: 
	I0318 05:16:28.908674   23649 fix.go:112] recreateIfNeeded on newest-cni-461000: state=Stopped err=<nil>
	W0318 05:16:28.908703   23649 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 05:16:28.916027   23649 out.go:177] * Restarting existing qemu2 VM for "newest-cni-461000" ...
	I0318 05:16:28.921276   23649 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:85:7c:93:18:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18427-19517/.minikube/machines/newest-cni-461000/disk.qcow2
	I0318 05:16:28.932049   23649 main.go:141] libmachine: STDOUT: 
	I0318 05:16:28.932117   23649 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 05:16:28.932193   23649 fix.go:56] duration metric: took 24.262458ms for fixHost
	I0318 05:16:28.932207   23649 start.go:83] releasing machines lock for "newest-cni-461000", held for 24.404667ms
	W0318 05:16:28.932356   23649 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-461000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-461000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 05:16:28.940948   23649 out.go:177] 
	W0318 05:16:28.944102   23649 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 05:16:28.944139   23649 out.go:239] * 
	* 
	W0318 05:16:28.946905   23649 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 05:16:28.959906   23649 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-461000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-461000 -n newest-cni-461000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-461000 -n newest-cni-461000: exit status 7 (69.880667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-461000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-092000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000: exit status 7 (33.390709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-092000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-092000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-092000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.287791ms)

** stderr ** 
	error: context "default-k8s-diff-port-092000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-092000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000: exit status 7 (31.09775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-092000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
  }
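Triage note: the diff above is go-cmp's "(-want +got)" form, so a leading "-" on every entry means the entire expected v1.28.4 image set is absent: with the host Stopped, "image list" returns nothing. A hypothetical manual re-check once the profile is actually running (same command as the test; table output assumed to be supported by this minikube build):

	out/minikube-darwin-arm64 -p default-k8s-diff-port-092000 image list --format=table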
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000: exit status 7 (31.052334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-092000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-092000 --alsologtostderr -v=1: exit status 83 (42.482458ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-092000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-092000"

-- /stdout --
** stderr ** 
	I0318 05:16:26.146148   23668 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:16:26.146282   23668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:16:26.146286   23668 out.go:304] Setting ErrFile to fd 2...
	I0318 05:16:26.146288   23668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:16:26.146410   23668 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:16:26.146625   23668 out.go:298] Setting JSON to false
	I0318 05:16:26.146634   23668 mustload.go:65] Loading cluster: default-k8s-diff-port-092000
	I0318 05:16:26.146827   23668 config.go:182] Loaded profile config "default-k8s-diff-port-092000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 05:16:26.150178   23668 out.go:177] * The control-plane node default-k8s-diff-port-092000 host is not running: state=Stopped
	I0318 05:16:26.154070   23668 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-092000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-092000 --alsologtostderr -v=1 failed: exit status 83
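Triage note: exit status 83 is minikube refusing to pause because the profile's host is Stopped (see the stdout above), another downstream symptom of the failed VM start. A plausible manual sequence, using only commands already shown in this report:

	out/minikube-darwin-arm64 start -p default-k8s-diff-port-092000
	out/minikube-darwin-arm64 pause -p default-k8s-diff-port-092000 --alsologtostderr -v=1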
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000: exit status 7 (31.047958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-092000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000: exit status 7 (30.989ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-461000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-461000 -n newest-cni-461000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-461000 -n newest-cni-461000: exit status 7 (32.198667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-461000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-461000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-461000 --alsologtostderr -v=1: exit status 83 (42.297375ms)

-- stdout --
	* The control-plane node newest-cni-461000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-461000"

-- /stdout --
** stderr ** 
	I0318 05:16:29.147897   23698 out.go:291] Setting OutFile to fd 1 ...
	I0318 05:16:29.148049   23698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:16:29.148052   23698 out.go:304] Setting ErrFile to fd 2...
	I0318 05:16:29.148055   23698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 05:16:29.148166   23698 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 05:16:29.148386   23698 out.go:298] Setting JSON to false
	I0318 05:16:29.148395   23698 mustload.go:65] Loading cluster: newest-cni-461000
	I0318 05:16:29.148585   23698 config.go:182] Loaded profile config "newest-cni-461000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0318 05:16:29.152038   23698 out.go:177] * The control-plane node newest-cni-461000 host is not running: state=Stopped
	I0318 05:16:29.155882   23698 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-461000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-461000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-461000 -n newest-cni-461000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-461000 -n newest-cni-461000: exit status 7 (32.117875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-461000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-461000 -n newest-cni-461000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-461000 -n newest-cni-461000: exit status 7 (32.117667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-461000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)


Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.11
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.28.4/json-events 31.31
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.23
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.22
21 TestDownloadOnly/v1.29.0-rc.2/json-events 30.19
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.23
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.23
30 TestBinaryMirror 0.35
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 9.07
48 TestErrorSpam/start 0.39
49 TestErrorSpam/status 0.1
50 TestErrorSpam/pause 0.12
51 TestErrorSpam/unpause 0.12
52 TestErrorSpam/stop 8.31
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 6.08
64 TestFunctional/serial/CacheCmd/cache/add_local 1.17
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.24
80 TestFunctional/parallel/DryRun 0.3
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.12
102 TestFunctional/parallel/License 1.43
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 5.45
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
135 TestFunctional/parallel/ProfileCmd/profile_list 0.11
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_addon-resizer_images 0.17
145 TestFunctional/delete_my-image_image 0.04
146 TestFunctional/delete_minikube_cached_images 0.04
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 3.21
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.33
202 TestMainNoArgs 0.03
247 TestStoppedBinaryUpgrade/Setup 5
249 TestStoppedBinaryUpgrade/MinikubeLogs 0.76
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
267 TestNoKubernetes/serial/ProfileList 0.15
268 TestNoKubernetes/serial/Stop 2.04
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.06
284 TestStartStop/group/old-k8s-version/serial/Stop 3.28
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
295 TestStartStop/group/no-preload/serial/Stop 3.83
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
306 TestStartStop/group/embed-certs/serial/Stop 3.73
307 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
319 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.22
320 TestStartStop/group/newest-cni/serial/DeployApp 0
321 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
322 TestStartStop/group/newest-cni/serial/Stop 3.35
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-305000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-305000: exit status 85 (106.4295ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-305000 | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT |          |
	|         | -p download-only-305000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 04:48:01
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 04:48:01.115369   19928 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:48:01.115504   19928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:48:01.115507   19928 out.go:304] Setting ErrFile to fd 2...
	I0318 04:48:01.115510   19928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:48:01.115641   19928 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	W0318 04:48:01.115725   19928 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18427-19517/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18427-19517/.minikube/config/config.json: no such file or directory
	I0318 04:48:01.116985   19928 out.go:298] Setting JSON to true
	I0318 04:48:01.134790   19928 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10054,"bootTime":1710752427,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:48:01.134855   19928 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:48:01.139982   19928 out.go:97] [download-only-305000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:48:01.142858   19928 out.go:169] MINIKUBE_LOCATION=18427
	I0318 04:48:01.140141   19928 notify.go:220] Checking for updates...
	W0318 04:48:01.140201   19928 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball: no such file or directory
	I0318 04:48:01.151865   19928 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 04:48:01.155873   19928 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:48:01.159886   19928 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:48:01.162982   19928 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	W0318 04:48:01.168915   19928 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 04:48:01.169145   19928 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:48:01.171883   19928 out.go:97] Using the qemu2 driver based on user configuration
	I0318 04:48:01.171903   19928 start.go:297] selected driver: qemu2
	I0318 04:48:01.171918   19928 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:48:01.172024   19928 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:48:01.174827   19928 out.go:169] Automatically selected the socket_vmnet network
	I0318 04:48:01.181141   19928 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0318 04:48:01.181250   19928 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 04:48:01.181348   19928 cni.go:84] Creating CNI manager for ""
	I0318 04:48:01.181368   19928 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 04:48:01.181417   19928 start.go:340] cluster config:
	{Name:download-only-305000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:48:01.186215   19928 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:48:01.190753   19928 out.go:97] Downloading VM boot image ...
	I0318 04:48:01.190771   19928 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso
	I0318 04:48:19.138290   19928 out.go:97] Starting "download-only-305000" primary control-plane node in "download-only-305000" cluster
	I0318 04:48:19.138315   19928 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 04:48:19.458724   19928 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 04:48:19.458803   19928 cache.go:56] Caching tarball of preloaded images
	I0318 04:48:19.460492   19928 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 04:48:19.466322   19928 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0318 04:48:19.466346   19928 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:48:20.068677   19928 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 04:48:39.000995   19928 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:48:39.001150   19928 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:48:39.698964   19928 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 04:48:39.699162   19928 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/download-only-305000/config.json ...
	I0318 04:48:39.699180   19928 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/download-only-305000/config.json: {Name:mka42895365f71bc1505c7c59e512495f624655a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:48:39.699391   19928 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 04:48:39.699580   19928 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0318 04:48:40.712926   19928 out.go:169] 
	W0318 04:48:40.717963   19928 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18427-19517/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1089cb520 0x1089cb520 0x1089cb520 0x1089cb520 0x1089cb520 0x1089cb520 0x1089cb520] Decompressors:map[bz2:0x140007daa28 gz:0x140007daab0 tar:0x140007daa60 tar.bz2:0x140007daa70 tar.gz:0x140007daa80 tar.xz:0x140007daa90 tar.zst:0x140007daaa0 tbz2:0x140007daa70 tgz:0x140007daa80 txz:0x140007daa90 tzst:0x140007daaa0 xz:0x140007daab8 zip:0x140007daac0 zst:0x140007daad0] Getters:map[file:0x140006c8c70 http:0x14000568230 https:0x14000568280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0318 04:48:40.717989   19928 out_reason.go:110] 
	W0318 04:48:40.726764   19928 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:48:40.730916   19928 out.go:169] 
	
	
	* The control-plane node download-only-305000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-305000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.11s)
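Triage note: this step passes (the test tolerates `logs` exiting 85 on a download-only profile), but the embedded log records the real cause of the v1.20.0 failures: the kubectl checksum download returns 404, which suggests no darwin/arm64 kubectl artifact is published for v1.20.0. The 404 can be confirmed directly (URL taken from the log above; -I fetches headers only):

	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256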

TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-305000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.28.4/json-events (31.31s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-573000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-573000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 : (31.312116917s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (31.31s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-573000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-573000: exit status 85 (84.543ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-305000 | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT |                     |
	|         | -p download-only-305000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT | 18 Mar 24 04:48 PDT |
	| delete  | -p download-only-305000        | download-only-305000 | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT | 18 Mar 24 04:48 PDT |
	| start   | -o=json --download-only        | download-only-573000 | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT |                     |
	|         | -p download-only-573000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 04:48:41
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 04:48:41.417051   19968 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:48:41.417188   19968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:48:41.417192   19968 out.go:304] Setting ErrFile to fd 2...
	I0318 04:48:41.417194   19968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:48:41.417362   19968 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:48:41.418413   19968 out.go:298] Setting JSON to true
	I0318 04:48:41.434388   19968 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10094,"bootTime":1710752427,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:48:41.434464   19968 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:48:41.438527   19968 out.go:97] [download-only-573000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:48:41.442506   19968 out.go:169] MINIKUBE_LOCATION=18427
	I0318 04:48:41.438613   19968 notify.go:220] Checking for updates...
	I0318 04:48:41.449506   19968 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 04:48:41.452503   19968 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:48:41.459505   19968 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:48:41.466547   19968 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	W0318 04:48:41.473512   19968 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 04:48:41.473675   19968 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:48:41.476488   19968 out.go:97] Using the qemu2 driver based on user configuration
	I0318 04:48:41.476496   19968 start.go:297] selected driver: qemu2
	I0318 04:48:41.476499   19968 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:48:41.476533   19968 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:48:41.480486   19968 out.go:169] Automatically selected the socket_vmnet network
	I0318 04:48:41.485811   19968 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0318 04:48:41.485898   19968 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 04:48:41.485945   19968 cni.go:84] Creating CNI manager for ""
	I0318 04:48:41.485954   19968 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:48:41.485958   19968 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:48:41.486006   19968 start.go:340] cluster config:
	{Name:download-only-573000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-573000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:48:41.490515   19968 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:48:41.493605   19968 out.go:97] Starting "download-only-573000" primary control-plane node in "download-only-573000" cluster
	I0318 04:48:41.493614   19968 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:48:42.598099   19968 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:48:42.598184   19968 cache.go:56] Caching tarball of preloaded images
	I0318 04:48:42.599869   19968 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:48:42.603826   19968 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0318 04:48:42.603875   19968 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:48:43.211388   19968 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:48:59.470983   19968 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:48:59.471139   19968 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:49:00.052877   19968 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:49:00.053064   19968 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/download-only-573000/config.json ...
	I0318 04:49:00.053079   19968 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/download-only-573000/config.json: {Name:mk2470a61b38f364a40700f1f659ce1837438ecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:49:00.054123   19968 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:49:00.054241   19968 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/darwin/arm64/v1.28.4/kubectl
	
	
	* The control-plane node download-only-573000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-573000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
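Triage note: the v1.28.4 preload download above completes and is checksum-verified by minikube itself. If the cache were ever suspect, a hypothetical manual check on macOS is to recompute the md5 against the value embedded in the download URL above:

	md5 -q /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	# expected: 6fb922d1d9dc01a9d3c0b965ed219613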

TestDownloadOnly/v1.28.4/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.23s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-573000
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.29.0-rc.2/json-events (30.19s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-945000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-945000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 : (30.185646459s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (30.19s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-945000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-945000: exit status 85 (78.1415ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-305000 | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT |                     |
	|         | -p download-only-305000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT | 18 Mar 24 04:48 PDT |
	| delete  | -p download-only-305000           | download-only-305000 | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT | 18 Mar 24 04:48 PDT |
	| start   | -o=json --download-only           | download-only-573000 | jenkins | v1.32.0 | 18 Mar 24 04:48 PDT |                     |
	|         | -p download-only-573000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
	| delete  | -p download-only-573000           | download-only-573000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT | 18 Mar 24 04:49 PDT |
	| start   | -o=json --download-only           | download-only-945000 | jenkins | v1.32.0 | 18 Mar 24 04:49 PDT |                     |
	|         | -p download-only-945000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 04:49:13
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 04:49:13.270499   20003 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:49:13.270629   20003 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:49:13.270633   20003 out.go:304] Setting ErrFile to fd 2...
	I0318 04:49:13.270635   20003 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:49:13.270770   20003 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:49:13.271897   20003 out.go:298] Setting JSON to true
	I0318 04:49:13.287895   20003 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10126,"bootTime":1710752427,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:49:13.287960   20003 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:49:13.292975   20003 out.go:97] [download-only-945000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:49:13.296893   20003 out.go:169] MINIKUBE_LOCATION=18427
	I0318 04:49:13.293074   20003 notify.go:220] Checking for updates...
	I0318 04:49:13.303910   20003 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 04:49:13.306938   20003 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:49:13.309948   20003 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:49:13.312936   20003 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	W0318 04:49:13.318824   20003 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 04:49:13.319015   20003 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:49:13.321854   20003 out.go:97] Using the qemu2 driver based on user configuration
	I0318 04:49:13.321862   20003 start.go:297] selected driver: qemu2
	I0318 04:49:13.321866   20003 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:49:13.321923   20003 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:49:13.324891   20003 out.go:169] Automatically selected the socket_vmnet network
	I0318 04:49:13.330051   20003 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0318 04:49:13.330141   20003 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 04:49:13.330184   20003 cni.go:84] Creating CNI manager for ""
	I0318 04:49:13.330194   20003 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:49:13.330205   20003 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:49:13.330253   20003 start.go:340] cluster config:
	{Name:download-only-945000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-945000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:49:13.334717   20003 iso.go:125] acquiring lock: {Name:mk605f169536f8f2c78a5b8e24ec790c2ceaf5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:49:13.337941   20003 out.go:97] Starting "download-only-945000" primary control-plane node in "download-only-945000" cluster
	I0318 04:49:13.337952   20003 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 04:49:13.999731   20003 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0318 04:49:13.999795   20003 cache.go:56] Caching tarball of preloaded images
	I0318 04:49:14.000543   20003 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 04:49:14.005918   20003 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0318 04:49:14.005956   20003 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:49:14.596190   20003 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4?checksum=md5:ec278d0a65e5e64ee0e67f51e14b1867 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0318 04:49:30.064107   20003 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:49:30.064265   20003 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:49:30.618846   20003 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0318 04:49:30.619039   20003 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/download-only-945000/config.json ...
	I0318 04:49:30.619059   20003 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18427-19517/.minikube/profiles/download-only-945000/config.json: {Name:mk0a8b893ce88b9e785a32e2b29d3006df07a1e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:49:30.619315   20003 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 04:49:30.619429   20003 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18427-19517/.minikube/cache/darwin/arm64/v1.29.0-rc.2/kubectl
	
	
	* The control-plane node download-only-945000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-945000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.23s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.23s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-945000
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.35s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-892000 --alsologtostderr --binary-mirror http://127.0.0.1:54091 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-892000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-892000
--- PASS: TestBinaryMirror (0.35s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-009000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-009000: exit status 85 (59.698666ms)

-- stdout --
	* Profile "addons-009000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-009000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-009000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-009000: exit status 85 (63.438125ms)

-- stdout --
	* Profile "addons-009000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-009000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (9.07s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.07s)

TestErrorSpam/start (0.39s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.1s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 status: exit status 7 (32.7355ms)

-- stdout --
	nospam-701000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 status: exit status 7 (31.692208ms)

-- stdout --
	nospam-701000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 status: exit status 7 (31.215958ms)

-- stdout --
	nospam-701000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)

TestErrorSpam/pause (0.12s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 pause: exit status 83 (40.1335ms)

-- stdout --
	* The control-plane node nospam-701000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-701000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 pause: exit status 83 (42.195333ms)

-- stdout --
	* The control-plane node nospam-701000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-701000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 pause: exit status 83 (41.926459ms)

-- stdout --
	* The control-plane node nospam-701000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-701000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 unpause: exit status 83 (39.793292ms)

-- stdout --
	* The control-plane node nospam-701000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-701000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 unpause: exit status 83 (40.870875ms)

-- stdout --
	* The control-plane node nospam-701000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-701000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 unpause: exit status 83 (40.629584ms)

-- stdout --
	* The control-plane node nospam-701000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-701000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (8.31s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 stop: (1.873568792s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 stop: (3.460146333s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-701000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-701000 stop: (2.975281291s)
--- PASS: TestErrorSpam/stop (8.31s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18427-19517/.minikube/files/etc/test/nested/copy/19926/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-681000 cache add registry.k8s.io/pause:3.1: (2.132482209s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-681000 cache add registry.k8s.io/pause:3.3: (2.163583583s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-681000 cache add registry.k8s.io/pause:latest: (1.784742625s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.08s)

TestFunctional/serial/CacheCmd/cache/add_local (1.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-681000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local3785891817/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 cache add minikube-local-cache-test:functional-681000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 cache delete minikube-local-cache-test:functional-681000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-681000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.24s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 config get cpus: exit status 14 (31.30975ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 config get cpus: exit status 14 (38.1675ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)

TestFunctional/parallel/DryRun (0.3s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-681000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-681000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (178.760958ms)

-- stdout --
	* [functional-681000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0318 04:51:37.384628   20638 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:51:37.384826   20638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:51:37.384831   20638 out.go:304] Setting ErrFile to fd 2...
	I0318 04:51:37.384834   20638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:51:37.385030   20638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:51:37.386357   20638 out.go:298] Setting JSON to false
	I0318 04:51:37.406107   20638 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10270,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:51:37.406177   20638 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:51:37.411295   20638 out.go:177] * [functional-681000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:51:37.423226   20638 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 04:51:37.418330   20638 notify.go:220] Checking for updates...
	I0318 04:51:37.431213   20638 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 04:51:37.440247   20638 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:51:37.443292   20638 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:51:37.446243   20638 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 04:51:37.449326   20638 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:51:37.452617   20638 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:51:37.452925   20638 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:51:37.456217   20638 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:51:37.463242   20638 start.go:297] selected driver: qemu2
	I0318 04:51:37.463249   20638 start.go:901] validating driver "qemu2" against &{Name:functional-681000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-681000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:51:37.463320   20638 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:51:37.470203   20638 out.go:177] 
	W0318 04:51:37.474422   20638 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0318 04:51:37.478290   20638 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-681000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.30s)

TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-681000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-681000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.117958ms)

-- stdout --
	* [functional-681000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0318 04:51:37.635703   20649 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:51:37.635807   20649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:51:37.635810   20649 out.go:304] Setting ErrFile to fd 2...
	I0318 04:51:37.635812   20649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:51:37.635942   20649 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18427-19517/.minikube/bin
	I0318 04:51:37.637413   20649 out.go:298] Setting JSON to false
	I0318 04:51:37.654175   20649 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":10270,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:51:37.654249   20649 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:51:37.659315   20649 out.go:177] * [functional-681000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	I0318 04:51:37.666297   20649 out.go:177]   - MINIKUBE_LOCATION=18427
	I0318 04:51:37.670260   20649 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	I0318 04:51:37.666360   20649 notify.go:220] Checking for updates...
	I0318 04:51:37.674223   20649 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:51:37.677237   20649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:51:37.680275   20649 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	I0318 04:51:37.683334   20649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:51:37.686585   20649 config.go:182] Loaded profile config "functional-681000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:51:37.686895   20649 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:51:37.691247   20649 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0318 04:51:37.698277   20649 start.go:297] selected driver: qemu2
	I0318 04:51:37.698284   20649 start.go:901] validating driver "qemu2" against &{Name:functional-681000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-681000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:51:37.698346   20649 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:51:37.705257   20649 out.go:177] 
	W0318 04:51:37.709243   20649 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0318 04:51:37.713252   20649 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (1.43s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-arm64 license: (1.427015083s)
--- PASS: TestFunctional/parallel/License (1.43s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (5.45s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.411361959s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-681000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.45s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-681000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 image rm gcr.io/google-containers/addon-resizer:functional-681000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-681000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 image save --daemon gcr.io/google-containers/addon-resizer:functional-681000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-681000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.11s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "75.016709ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "35.916125ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "73.012ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "35.805792ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.01088825s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-681000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.17s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-681000
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-681000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-681000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.21s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-370000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-370000 --output=json --user=testUser: (3.205699708s)
--- PASS: TestJSONOutput/stop/Command (3.21s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-041000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-041000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (98.954833ms)

-- stdout --
	{"specversion":"1.0","id":"de907d5f-e1b2-4c94-a5e4-39891dcb5a52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-041000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7ee998b3-be92-4fc7-9679-fb2e46340e97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18427"}}
	{"specversion":"1.0","id":"e994d2d4-e868-4846-b0a8-c4999f0a316d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig"}}
	{"specversion":"1.0","id":"d9bff9a6-ce32-468f-9993-effc89e46c4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"d4b8e0f0-ea70-42a8-a49e-441a5ec32c27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"168ad889-5047-4166-a805-7deb7f67020a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube"}}
	{"specversion":"1.0","id":"abc50afd-c366-4b87-9021-73383eb34185","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d51d56cc-15f3-4f54-b5ca-3938a24fa943","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-041000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-041000
--- PASS: TestErrorJSONOutput (0.33s)
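
Each stdout line above is a CloudEvents envelope (specversion, id, source, type, datacontenttype, data) whose data fields are all strings. A minimal sketch for consuming such a stream line by line; the event struct is inferred from the log above, not taken from minikube's source.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models the CloudEvents envelope minikube emits with --output=json;
// the fields match what is visible in the stdout above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Usage (hypothetical): minikube start --output=json ... | ./parse
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // tolerate any non-JSON noise between events
		}
		switch e.Type {
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (%s): %s\n", e.Data["exitcode"], e.Data["name"], e.Data["message"])
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", e.Data["currentstep"], e.Data["totalsteps"], e.Data["message"])
		default:
			fmt.Println(e.Data["message"])
		}
	}
}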

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (5s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (5.00s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-211000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-277000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-277000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (100.886458ms)

-- stdout --
	* [NoKubernetes-277000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18427
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18427-19517/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18427-19517/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
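
The exit status 14 (MK_USAGE) above is minikube rejecting --kubernetes-version combined with --no-kubernetes. A generic sketch of that style of mutual-exclusion check using Go's standard flag package; this is purely illustrative and not minikube's actual validation code.

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	version := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// Reject the conflicting pair up front, mirroring the MK_USAGE exit
	// (status 14) shown in the test output above.
	if *noK8s && *version != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags accepted")
}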

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-277000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-277000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.759209ms)

-- stdout --
	* The control-plane node NoKubernetes-277000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-277000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
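
The verification above leans entirely on exit codes: systemctl is-active --quiet exits non-zero when kubelet is inactive, and minikube ssh itself exits 83 when the profile's host is stopped. A minimal sketch of the same probe from Go; the binary path and profile name are copied from the log, the rest is an assumption.

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive asks systemd inside the guest whether kubelet is running,
// relying only on exit status, exactly as the test above does.
func kubeletActive(profile string) (bool, error) {
	cmd := exec.Command("out/minikube-darwin-arm64", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	if err == nil {
		return true, nil // exit 0: kubelet is active
	}
	if ee, ok := err.(*exec.ExitError); ok {
		// Any non-zero exit means "not active"; 83 specifically means
		// the minikube host itself is not running (see the stdout above).
		fmt.Printf("not active (exit %d)\n", ee.ExitCode())
		return false, nil
	}
	return false, err // the binary could not be run at all
}

func main() {
	active, err := kubeletActive("NoKubernetes-277000")
	fmt.Println(active, err)
}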

TestNoKubernetes/serial/ProfileList (0.15s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.15s)

TestNoKubernetes/serial/Stop (2.04s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-277000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-277000: (2.037657291s)
--- PASS: TestNoKubernetes/serial/Stop (2.04s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.06s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-277000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-277000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (61.600417ms)

-- stdout --
	* The control-plane node NoKubernetes-277000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-277000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.06s)

TestStartStop/group/old-k8s-version/serial/Stop (3.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-431000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-431000 --alsologtostderr -v=3: (3.276976792s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.28s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-431000 -n old-k8s-version-431000: exit status 7 (62.217166ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-431000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
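
Here minikube status --format={{.Host}} encodes state in its exit code, and the harness accepts exit status 7 together with the "Stopped" output before re-enabling the dashboard addon. A small sketch that reads the status the same way; the binary path and profile name come from the log above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus returns the {{.Host}} field from `minikube status` plus the
// exit code; the code is 7 (rather than 0) when the host is stopped.
func hostStatus(profile string) (string, int, error) {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", profile)
	out, err := cmd.Output()
	if cmd.ProcessState == nil {
		return "", -1, err // the command never ran
	}
	// A non-zero exit is expected for stopped hosts, so the state string
	// and exit code matter more than err here.
	return strings.TrimSpace(string(out)), cmd.ProcessState.ExitCode(), nil
}

func main() {
	state, code, err := hostStatus("old-k8s-version-431000")
	fmt.Println(state, code, err) // e.g. Stopped 7 <nil>
}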

TestStartStop/group/no-preload/serial/Stop (3.83s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-051000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-051000 --alsologtostderr -v=3: (3.826237s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.83s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-051000 -n no-preload-051000: exit status 7 (58.47975ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-051000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.73s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-613000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-613000 --alsologtostderr -v=3: (3.731634208s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.73s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (60.093125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-613000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-092000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-092000 --alsologtostderr -v=3: (3.216165375s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.22s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-461000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.35s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-461000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-461000 --alsologtostderr -v=3: (3.351630625s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-092000 -n default-k8s-diff-port-092000: exit status 7 (55.217792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-092000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-461000 -n newest-cni-461000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-461000 -n newest-cni-461000: exit status 7 (57.264083ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-461000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (11.21s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-681000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2184400019/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710762660459427000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2184400019/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710762660459427000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2184400019/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710762660459427000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2184400019/001/test-1710762660459427000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (59.402041ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.964458ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.035834ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.473583ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (96.184083ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.814625ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (62.908791ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "sudo umount -f /mount-9p": exit status 83 (44.79025ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-681000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-681000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2184400019/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (11.21s)
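
The mount tests repeatedly probe findmnt over minikube ssh before declaring the mount absent; on this runner the 9p mount never appears because macOS requires a prompt before a non-code-signed binary may listen on a non-localhost port. A sketch of the same poll-with-retries pattern; the retry count and delay here are illustrative, not the harness's actual values.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount retries `findmnt -T <path>` inside the guest until the
// 9p mount shows up or the attempts are exhausted.
func waitForMount(profile, path string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", path))
		if cmd.Run() == nil {
			return nil // mount is visible inside the guest
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("mount %s did not appear after %d attempts", path, attempts)
}

func main() {
	fmt.Println(waitForMount("functional-681000", "/mount-9p", 7))
}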

TestFunctional/parallel/MountCmd/specific-port (10.21s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-681000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1130894676/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (61.867875ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.377417ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.416042ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.959416ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.053958ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.43075ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.796541ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "sudo umount -f /mount-9p": exit status 83 (48.803208ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-681000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-681000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1130894676/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (10.21s)

TestFunctional/parallel/MountCmd/VerifyCleanup (15.43s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-681000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2496552778/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-681000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2496552778/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-681000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2496552778/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T" /mount1: exit status 83 (88.704791ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T" /mount1: exit status 83 (85.899333ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T" /mount1: exit status 83 (87.613333ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T" /mount1: exit status 83 (88.163083ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T" /mount1: exit status 83 (88.906292ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T" /mount1: exit status 83 (86.6285ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T" /mount1: exit status 83 (86.87625ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-681000 ssh "findmnt -T" /mount1: exit status 83 (87.696333ms)

-- stdout --
	* The control-plane node functional-681000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-681000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-681000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2496552778/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-681000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2496552778/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-681000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2496552778/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (15.43s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.5s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-970000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-970000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-970000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-970000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-970000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-970000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-970000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-970000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-970000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-970000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-970000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: /etc/hosts:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: /etc/resolv.conf:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-970000

>>> host: crictl pods:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: crictl containers:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> k8s: describe netcat deployment:
error: context "cilium-970000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-970000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-970000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-970000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-970000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-970000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-970000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-970000" does not exist

>>> host: /etc/cni:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: ip a s:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: ip r s:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: iptables-save:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: iptables table nat:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-970000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-970000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-970000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-970000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-970000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-970000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-970000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-970000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-970000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-970000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-970000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: kubelet daemon config:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> k8s: kubelet logs:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-970000

>>> host: docker daemon status:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: docker daemon config:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: docker system info:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: cri-docker daemon status:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: cri-docker daemon config:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: cri-dockerd version:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: containerd daemon status:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: containerd daemon config:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: containerd config dump:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: crio daemon status:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: crio daemon config:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: /etc/crio:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

>>> host: crio config:
* Profile "cilium-970000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-970000"

----------------------- debugLogs end: cilium-970000 [took: 2.266243666s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-970000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-970000
--- SKIP: TestNetworkPlugins/group/cilium (2.50s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-603000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-603000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)