Test Report: QEMU_macOS 18431

Commit 80dc9090142297b85dde7abc1e10c47a59582e12 | 2024-03-18 | 33628

Failed tests (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 39.34
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.09
36 TestAddons/Setup 10.37
37 TestCertOptions 10.06
38 TestCertExpiration 195.34
39 TestDockerFlags 10.05
40 TestForceSystemdFlag 10.08
41 TestForceSystemdEnv 10.54
47 TestErrorSpam/setup 9.76
56 TestFunctional/serial/StartWithProxy 9.86
58 TestFunctional/serial/SoftStart 5.26
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.17
70 TestFunctional/serial/MinikubeKubectlCmd 0.56
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.71
72 TestFunctional/serial/ExtraConfig 5.26
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.08
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.17
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.04
90 TestFunctional/parallel/SSHCmd 0.14
91 TestFunctional/parallel/CpCmd 0.3
93 TestFunctional/parallel/FileSync 0.07
94 TestFunctional/parallel/CertSync 0.29
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
104 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
108 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 98.81
109 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
110 TestFunctional/parallel/ServiceCmd/List 0.05
111 TestFunctional/parallel/ServiceCmd/JSONOutput 0.05
112 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
113 TestFunctional/parallel/ServiceCmd/Format 0.04
114 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/Version/components 0.04
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
127 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.32
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.36
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.51
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
136 TestFunctional/parallel/DockerEnv/bash 0.05
137 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
138 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
139 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 38.68
150 TestMultiControlPlane/serial/StartCluster 9.96
151 TestMultiControlPlane/serial/DeployApp 114.45
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.08
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.11
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.12
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.11
159 TestMultiControlPlane/serial/RestartSecondaryNode 37.61
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.1
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.67
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.11
164 TestMultiControlPlane/serial/StopCluster 2.12
165 TestMultiControlPlane/serial/RestartCluster 5.27
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.11
167 TestMultiControlPlane/serial/AddSecondaryNode 0.07
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.11
171 TestImageBuild/serial/Setup 9.89
174 TestJSONOutput/start/Command 9.72
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.25
206 TestMountStart/serial/StartWithMountFirst 10.62
209 TestMultiNode/serial/FreshStart2Nodes 9.93
210 TestMultiNode/serial/DeployApp2Nodes 104.48
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.08
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.11
215 TestMultiNode/serial/CopyFile 0.07
216 TestMultiNode/serial/StopNode 0.15
217 TestMultiNode/serial/StartAfterStop 42.2
218 TestMultiNode/serial/RestartKeepsNodes 8.44
219 TestMultiNode/serial/DeleteNode 0.11
220 TestMultiNode/serial/StopMultiNode 2.25
221 TestMultiNode/serial/RestartMultiNode 5.26
222 TestMultiNode/serial/ValidateNameConflict 20.3
226 TestPreload 10.09
228 TestScheduledStopUnix 10.25
229 TestSkaffold 16.55
232 TestRunningBinaryUpgrade 625.57
234 TestKubernetesUpgrade 19.08
248 TestStoppedBinaryUpgrade/Upgrade 636.84
257 TestPause/serial/Start 10.14
261 TestNoKubernetes/serial/StartWithK8s 10.04
262 TestNoKubernetes/serial/StartWithStopK8s 5.92
263 TestNoKubernetes/serial/Start 5.87
266 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 4.19
268 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 2.54
269 TestNoKubernetes/serial/StartNoArgs 5.97
271 TestNetworkPlugins/group/auto/Start 9.89
272 TestNetworkPlugins/group/kindnet/Start 9.84
273 TestNetworkPlugins/group/calico/Start 9.79
274 TestNetworkPlugins/group/custom-flannel/Start 9.79
275 TestNetworkPlugins/group/false/Start 9.79
276 TestNetworkPlugins/group/enable-default-cni/Start 9.98
277 TestNetworkPlugins/group/flannel/Start 9.83
278 TestNetworkPlugins/group/bridge/Start 9.82
279 TestNetworkPlugins/group/kubenet/Start 9.86
281 TestStartStop/group/old-k8s-version/serial/FirstStart 9.92
282 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/old-k8s-version/serial/Pause 0.11
292 TestStartStop/group/no-preload/serial/FirstStart 9.85
293 TestStartStop/group/no-preload/serial/DeployApp 0.09
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
297 TestStartStop/group/no-preload/serial/SecondStart 5.26
298 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/no-preload/serial/Pause 0.11
303 TestStartStop/group/embed-certs/serial/FirstStart 9.96
304 TestStartStop/group/embed-certs/serial/DeployApp 0.09
305 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
308 TestStartStop/group/embed-certs/serial/SecondStart 5.27
309 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
310 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
311 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
312 TestStartStop/group/embed-certs/serial/Pause 0.1
314 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.89
316 TestStartStop/group/newest-cni/serial/FirstStart 9.84
317 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
324 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
326 TestStartStop/group/newest-cni/serial/SecondStart 5.26
327 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
328 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
329 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
330 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
334 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (39.34s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-983000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-983000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (39.34091325s)

-- stdout --
	{"specversion":"1.0","id":"dedc9f9d-489c-4420-bd33-8e555ec81365","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-983000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fd59825f-1cb5-4c57-84e0-8c16c02ef4e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18431"}}
	{"specversion":"1.0","id":"c10138c2-159f-41c5-9405-990bd3295510","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig"}}
	{"specversion":"1.0","id":"c4bbc30c-ce26-414b-b224-ad688048707a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"e73d427a-8e7c-4ef4-83f6-ff6fd221778a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4c983a48-7f24-414b-9a5a-8346c0e86620","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube"}}
	{"specversion":"1.0","id":"665395e4-7b89-476c-a415-ac4d9e3a2f12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"43cf121a-b9d0-4b62-bcc1-2beeeccea7ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9a1fc2a9-74a4-4aff-bb22-118e55a37322","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"6b82dec9-2cec-4420-9fbb-209407b7d1c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"70f90745-2218-409e-b367-fb5eaf908d0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-983000\" primary control-plane node in \"download-only-983000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5187c802-92c2-4321-a6d7-25b80212637e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e18ee3a6-c943-4ef3-88ed-60a5d37976e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104387520 0x104387520 0x104387520 0x104387520 0x104387520 0x104387520 0x104387520] Decompressors:map[bz2:0x14000651510 gz:0x14000651518 tar:0x14000651360 tar.bz2:0x14000651370 tar.gz:0x14000651380 tar.xz:0x140006513a0 tar.zst:0x140006513b0 tbz2:0x14000651370 tgz:0x14000651380 txz:0x140006513a0 tzst:0x140006513b0 xz:0x14000651520 zip:0x14000651530 zst:0x14000651528] Getters:map[file:0x140024a4770 http:0x140006dc230 https:0x140006dc280] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"011c3fc9-033e-4270-a45c-9b602b559517","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0318 03:49:29.696575   11037 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:49:29.696737   11037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:49:29.696740   11037 out.go:304] Setting ErrFile to fd 2...
	I0318 03:49:29.696743   11037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:49:29.696863   11037 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	W0318 03:49:29.696955   11037 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18431-10628/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18431-10628/.minikube/config/config.json: no such file or directory
	I0318 03:49:29.698269   11037 out.go:298] Setting JSON to true
	I0318 03:49:29.716024   11037 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6542,"bootTime":1710752427,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 03:49:29.716087   11037 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 03:49:29.721243   11037 out.go:97] [download-only-983000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 03:49:29.726257   11037 out.go:169] MINIKUBE_LOCATION=18431
	I0318 03:49:29.721383   11037 notify.go:220] Checking for updates...
	W0318 03:49:29.721415   11037 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball: no such file or directory
	I0318 03:49:29.733185   11037 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 03:49:29.736188   11037 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 03:49:29.739256   11037 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 03:49:29.742205   11037 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	W0318 03:49:29.748197   11037 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 03:49:29.748429   11037 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 03:49:29.749903   11037 out.go:97] Using the qemu2 driver based on user configuration
	I0318 03:49:29.749921   11037 start.go:297] selected driver: qemu2
	I0318 03:49:29.749926   11037 start.go:901] validating driver "qemu2" against <nil>
	I0318 03:49:29.749985   11037 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 03:49:29.753219   11037 out.go:169] Automatically selected the socket_vmnet network
	I0318 03:49:29.758451   11037 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0318 03:49:29.758548   11037 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 03:49:29.758634   11037 cni.go:84] Creating CNI manager for ""
	I0318 03:49:29.758651   11037 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 03:49:29.758710   11037 start.go:340] cluster config:
	{Name:download-only-983000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-983000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 03:49:29.763357   11037 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 03:49:29.766158   11037 out.go:97] Downloading VM boot image ...
	I0318 03:49:29.766171   11037 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso
	I0318 03:49:47.597446   11037 out.go:97] Starting "download-only-983000" primary control-plane node in "download-only-983000" cluster
	I0318 03:49:47.597470   11037 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 03:49:47.880699   11037 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 03:49:47.880837   11037 cache.go:56] Caching tarball of preloaded images
	I0318 03:49:47.881533   11037 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 03:49:47.887052   11037 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0318 03:49:47.887082   11037 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 03:49:48.473563   11037 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 03:50:07.641124   11037 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 03:50:07.641310   11037 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 03:50:08.342042   11037 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 03:50:08.342246   11037 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/download-only-983000/config.json ...
	I0318 03:50:08.342264   11037 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/download-only-983000/config.json: {Name:mk2f92b12e91f46ec3ec4150e042456020d39c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 03:50:08.343689   11037 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 03:50:08.344054   11037 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0318 03:50:08.957433   11037 out.go:169] 
	W0318 03:50:08.962540   11037 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104387520 0x104387520 0x104387520 0x104387520 0x104387520 0x104387520 0x104387520] Decompressors:map[bz2:0x14000651510 gz:0x14000651518 tar:0x14000651360 tar.bz2:0x14000651370 tar.gz:0x14000651380 tar.xz:0x140006513a0 tar.zst:0x140006513b0 tbz2:0x14000651370 tgz:0x14000651380 txz:0x140006513a0 tzst:0x140006513b0 xz:0x14000651520 zip:0x14000651530 zst:0x14000651528] Getters:map[file:0x140024a4770 http:0x140006dc230 https:0x140006dc280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0318 03:50:08.962564   11037 out_reason.go:110] 
	W0318 03:50:08.969457   11037 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 03:50:08.973457   11037 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-983000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (39.34s)
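
Note: the failure above is not flaky infrastructure but a hard 404 on the kubectl checksum URL: dl.k8s.io appears to publish no darwin/arm64 kubectl binary for v1.20.0 (Apple Silicon binaries only began shipping in later Kubernetes releases), so this download can never succeed on an arm64 Mac. A minimal Go sketch, not part of the minikube test suite, that reproduces the 404 with a plain HEAD request against the URL copied from the log:

	// headcheck.go — hypothetical helper, not minikube code.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Checksum URL copied verbatim from the failure message above.
		const url = "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status) // on this version/arch combination: "404 Not Found"
	}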

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
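
Note: this subtest downloads nothing itself; it only asserts that the previous test left a kubectl binary in the cache, so it fails as a direct consequence of the 404 above. A minimal sketch of the same existence check (the path is copied from the failure message and would differ under another MINIKUBE_HOME):

	// statcheck.go — hypothetical helper mirroring the test's assertion.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Cache path copied from the failure message above.
		const path = "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
		if _, err := os.Stat(path); err != nil {
			fmt.Println("kubectl binary missing:", err) // the state this run was in
			return
		}
		fmt.Println("kubectl binary present")
	}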

TestOffline (10.09s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-654000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-654000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.902257625s)

-- stdout --
	* [offline-docker-654000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-654000" primary control-plane node in "offline-docker-654000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-654000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:02:17.566510   12598 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:02:17.566643   12598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:02:17.566646   12598 out.go:304] Setting ErrFile to fd 2...
	I0318 04:02:17.566649   12598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:02:17.566783   12598 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:02:17.567972   12598 out.go:298] Setting JSON to false
	I0318 04:02:17.585518   12598 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7310,"bootTime":1710752427,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:02:17.585615   12598 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:02:17.590499   12598 out.go:177] * [offline-docker-654000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:02:17.598641   12598 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:02:17.598700   12598 notify.go:220] Checking for updates...
	I0318 04:02:17.604538   12598 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:02:17.607548   12598 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:02:17.610483   12598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:02:17.613530   12598 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:02:17.616602   12598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:02:17.619890   12598 config.go:182] Loaded profile config "multinode-886000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:02:17.619948   12598 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:02:17.623533   12598 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:02:17.630476   12598 start.go:297] selected driver: qemu2
	I0318 04:02:17.630485   12598 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:02:17.630497   12598 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:02:17.632611   12598 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:02:17.635508   12598 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:02:17.638649   12598 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:02:17.638679   12598 cni.go:84] Creating CNI manager for ""
	I0318 04:02:17.638686   12598 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:02:17.638690   12598 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:02:17.638722   12598 start.go:340] cluster config:
	{Name:offline-docker-654000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-654000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:02:17.643118   12598 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:02:17.651505   12598 out.go:177] * Starting "offline-docker-654000" primary control-plane node in "offline-docker-654000" cluster
	I0318 04:02:17.655399   12598 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:02:17.655434   12598 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:02:17.655445   12598 cache.go:56] Caching tarball of preloaded images
	I0318 04:02:17.655530   12598 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:02:17.655537   12598 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:02:17.655597   12598 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/offline-docker-654000/config.json ...
	I0318 04:02:17.655608   12598 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/offline-docker-654000/config.json: {Name:mk31c279c923b05ba2e9bf26f337e94aceb415d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:02:17.655893   12598 start.go:360] acquireMachinesLock for offline-docker-654000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:02:17.655924   12598 start.go:364] duration metric: took 23.375µs to acquireMachinesLock for "offline-docker-654000"
	I0318 04:02:17.655936   12598 start.go:93] Provisioning new machine with config: &{Name:offline-docker-654000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-654000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:02:17.655976   12598 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:02:17.660537   12598 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 04:02:17.675815   12598 start.go:159] libmachine.API.Create for "offline-docker-654000" (driver="qemu2")
	I0318 04:02:17.675845   12598 client.go:168] LocalClient.Create starting
	I0318 04:02:17.675913   12598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:02:17.675942   12598 main.go:141] libmachine: Decoding PEM data...
	I0318 04:02:17.675952   12598 main.go:141] libmachine: Parsing certificate...
	I0318 04:02:17.676006   12598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:02:17.676027   12598 main.go:141] libmachine: Decoding PEM data...
	I0318 04:02:17.676034   12598 main.go:141] libmachine: Parsing certificate...
	I0318 04:02:17.676408   12598 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:02:17.816591   12598 main.go:141] libmachine: Creating SSH key...
	I0318 04:02:17.915971   12598 main.go:141] libmachine: Creating Disk image...
	I0318 04:02:17.915980   12598 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:02:17.916175   12598 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/offline-docker-654000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/offline-docker-654000/disk.qcow2
	I0318 04:02:17.929514   12598 main.go:141] libmachine: STDOUT: 
	I0318 04:02:17.929632   12598 main.go:141] libmachine: STDERR: 
	I0318 04:02:17.929764   12598 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/offline-docker-654000/disk.qcow2 +20000M
	I0318 04:02:17.948196   12598 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:02:17.948218   12598 main.go:141] libmachine: STDERR: 
	I0318 04:02:17.948236   12598 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/offline-docker-654000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/offline-docker-654000/disk.qcow2
	I0318 04:02:17.948241   12598 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:02:17.948270   12598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/offline-docker-654000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/offline-docker-654000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/offline-docker-654000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:1d:20:c5:39:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/offline-docker-654000/disk.qcow2
	I0318 04:02:17.950132   12598 main.go:141] libmachine: STDOUT: 
	I0318 04:02:17.950146   12598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:02:17.950167   12598 client.go:171] duration metric: took 274.32475ms to LocalClient.Create
	I0318 04:02:19.951425   12598 start.go:128] duration metric: took 2.295497833s to createHost
	I0318 04:02:19.951445   12598 start.go:83] releasing machines lock for "offline-docker-654000", held for 2.295573834s
	W0318 04:02:19.951467   12598 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:02:19.959126   12598 out.go:177] * Deleting "offline-docker-654000" in qemu2 ...
	W0318 04:02:19.968134   12598 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:02:19.968147   12598 start.go:728] Will try again in 5 seconds ...
	I0318 04:02:24.970253   12598 start.go:360] acquireMachinesLock for offline-docker-654000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:02:24.970628   12598 start.go:364] duration metric: took 279.209µs to acquireMachinesLock for "offline-docker-654000"
	I0318 04:02:24.970779   12598 start.go:93] Provisioning new machine with config: &{Name:offline-docker-654000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-654000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:02:24.971130   12598 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:02:24.980740   12598 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 04:02:25.030467   12598 start.go:159] libmachine.API.Create for "offline-docker-654000" (driver="qemu2")
	I0318 04:02:25.030517   12598 client.go:168] LocalClient.Create starting
	I0318 04:02:25.030622   12598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:02:25.030693   12598 main.go:141] libmachine: Decoding PEM data...
	I0318 04:02:25.030708   12598 main.go:141] libmachine: Parsing certificate...
	I0318 04:02:25.030770   12598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:02:25.030811   12598 main.go:141] libmachine: Decoding PEM data...
	I0318 04:02:25.030821   12598 main.go:141] libmachine: Parsing certificate...
	I0318 04:02:25.031371   12598 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:02:25.190039   12598 main.go:141] libmachine: Creating SSH key...
	I0318 04:02:25.362678   12598 main.go:141] libmachine: Creating Disk image...
	I0318 04:02:25.362688   12598 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:02:25.362859   12598 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/offline-docker-654000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/offline-docker-654000/disk.qcow2
	I0318 04:02:25.375262   12598 main.go:141] libmachine: STDOUT: 
	I0318 04:02:25.375284   12598 main.go:141] libmachine: STDERR: 
	I0318 04:02:25.375345   12598 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/offline-docker-654000/disk.qcow2 +20000M
	I0318 04:02:25.385882   12598 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:02:25.385909   12598 main.go:141] libmachine: STDERR: 
	I0318 04:02:25.385924   12598 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/offline-docker-654000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/offline-docker-654000/disk.qcow2
	I0318 04:02:25.385929   12598 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:02:25.385964   12598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/offline-docker-654000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/offline-docker-654000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/offline-docker-654000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:82:2f:d8:69:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/offline-docker-654000/disk.qcow2
	I0318 04:02:25.387628   12598 main.go:141] libmachine: STDOUT: 
	I0318 04:02:25.387644   12598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:02:25.387657   12598 client.go:171] duration metric: took 357.142917ms to LocalClient.Create
	I0318 04:02:27.389778   12598 start.go:128] duration metric: took 2.418677333s to createHost
	I0318 04:02:27.389846   12598 start.go:83] releasing machines lock for "offline-docker-654000", held for 2.419245791s
	W0318 04:02:27.390237   12598 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-654000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-654000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:02:27.401873   12598 out.go:177] 
	W0318 04:02:27.406918   12598 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:02:27.406947   12598 out.go:239] * 
	* 
	W0318 04:02:27.409486   12598 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:02:27.419898   12598 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-654000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-03-18 04:02:27.438011 -0700 PDT m=+777.830171459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-654000 -n offline-docker-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-654000 -n offline-docker-654000: exit status 7 (70.114958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-654000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-654000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-654000
--- FAIL: TestOffline (10.09s)
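
Note: the signature error in this test — ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused — recurs in nearly every other qemu2 failure in this run, which points at a single host-level cause: the socket_vmnet daemon was not listening on the CI agent, so every VM create died before Kubernetes was ever involved. A minimal preflight sketch in Go (socket path taken from the log; whether the daemon is Homebrew-managed on this agent is an assumption):

	// vmnetcheck.go — hypothetical preflight, not minikube code: dial the
	// unix socket that the qemu2 driver passes to socket_vmnet_client and
	// report whether anything is listening on it.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the log above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// The state this agent was in: nothing accepting connections.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial fails, restarting the daemon on the agent (for a Homebrew install, something like sudo brew services restart socket_vmnet) should clear this entire class of failures before re-running the suite.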

TestAddons/Setup (10.37s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-852000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-852000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.370242875s)

-- stdout --
	* [addons-852000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-852000" primary control-plane node in "addons-852000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-852000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 03:50:58.416175   11204 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:50:58.416316   11204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:50:58.416320   11204 out.go:304] Setting ErrFile to fd 2...
	I0318 03:50:58.416322   11204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:50:58.416454   11204 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:50:58.417541   11204 out.go:298] Setting JSON to false
	I0318 03:50:58.433741   11204 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6631,"bootTime":1710752427,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 03:50:58.433808   11204 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 03:50:58.437403   11204 out.go:177] * [addons-852000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 03:50:58.444357   11204 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 03:50:58.447195   11204 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 03:50:58.444405   11204 notify.go:220] Checking for updates...
	I0318 03:50:58.453307   11204 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 03:50:58.454704   11204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 03:50:58.457308   11204 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 03:50:58.460344   11204 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 03:50:58.463490   11204 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 03:50:58.467270   11204 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 03:50:58.474323   11204 start.go:297] selected driver: qemu2
	I0318 03:50:58.474328   11204 start.go:901] validating driver "qemu2" against <nil>
	I0318 03:50:58.474333   11204 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 03:50:58.476470   11204 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 03:50:58.479314   11204 out.go:177] * Automatically selected the socket_vmnet network
	I0318 03:50:58.482387   11204 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 03:50:58.482443   11204 cni.go:84] Creating CNI manager for ""
	I0318 03:50:58.482452   11204 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 03:50:58.482456   11204 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 03:50:58.482501   11204 start.go:340] cluster config:
	{Name:addons-852000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-852000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_c
lient SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 03:50:58.486910   11204 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 03:50:58.494324   11204 out.go:177] * Starting "addons-852000" primary control-plane node in "addons-852000" cluster
	I0318 03:50:58.502376   11204 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 03:50:58.502390   11204 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 03:50:58.502400   11204 cache.go:56] Caching tarball of preloaded images
	I0318 03:50:58.502459   11204 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 03:50:58.502464   11204 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 03:50:58.502672   11204 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/addons-852000/config.json ...
	I0318 03:50:58.502685   11204 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/addons-852000/config.json: {Name:mka40700f141f06a2219565924fedb1774eeb6bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 03:50:58.502923   11204 start.go:360] acquireMachinesLock for addons-852000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 03:50:58.503125   11204 start.go:364] duration metric: took 195.167µs to acquireMachinesLock for "addons-852000"
	I0318 03:50:58.503143   11204 start.go:93] Provisioning new machine with config: &{Name:addons-852000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:addons-852000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 03:50:58.503171   11204 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 03:50:58.511286   11204 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0318 03:50:58.529480   11204 start.go:159] libmachine.API.Create for "addons-852000" (driver="qemu2")
	I0318 03:50:58.529511   11204 client.go:168] LocalClient.Create starting
	I0318 03:50:58.529716   11204 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 03:50:58.895743   11204 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 03:50:58.990794   11204 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 03:50:59.240294   11204 main.go:141] libmachine: Creating SSH key...
	I0318 03:50:59.308649   11204 main.go:141] libmachine: Creating Disk image...
	I0318 03:50:59.308654   11204 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 03:50:59.308827   11204 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/addons-852000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/addons-852000/disk.qcow2
	I0318 03:50:59.321564   11204 main.go:141] libmachine: STDOUT: 
	I0318 03:50:59.321586   11204 main.go:141] libmachine: STDERR: 
	I0318 03:50:59.321643   11204 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/addons-852000/disk.qcow2 +20000M
	I0318 03:50:59.332333   11204 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 03:50:59.332366   11204 main.go:141] libmachine: STDERR: 
	I0318 03:50:59.332379   11204 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/addons-852000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/addons-852000/disk.qcow2
	I0318 03:50:59.332384   11204 main.go:141] libmachine: Starting QEMU VM...
	I0318 03:50:59.332413   11204 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/addons-852000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/addons-852000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/addons-852000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:d5:ac:8d:a0:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/addons-852000/disk.qcow2
	I0318 03:50:59.334124   11204 main.go:141] libmachine: STDOUT: 
	I0318 03:50:59.334146   11204 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 03:50:59.334165   11204 client.go:171] duration metric: took 804.672333ms to LocalClient.Create
	I0318 03:51:01.336288   11204 start.go:128] duration metric: took 2.833180167s to createHost
	I0318 03:51:01.336356   11204 start.go:83] releasing machines lock for "addons-852000", held for 2.833302708s
	W0318 03:51:01.336447   11204 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 03:51:01.347770   11204 out.go:177] * Deleting "addons-852000" in qemu2 ...
	W0318 03:51:01.374270   11204 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 03:51:01.374327   11204 start.go:728] Will try again in 5 seconds ...
	I0318 03:51:06.376442   11204 start.go:360] acquireMachinesLock for addons-852000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 03:51:06.376891   11204 start.go:364] duration metric: took 324.5µs to acquireMachinesLock for "addons-852000"
	I0318 03:51:06.377019   11204 start.go:93] Provisioning new machine with config: &{Name:addons-852000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:addons-852000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 03:51:06.377324   11204 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 03:51:06.388964   11204 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0318 03:51:06.438547   11204 start.go:159] libmachine.API.Create for "addons-852000" (driver="qemu2")
	I0318 03:51:06.438595   11204 client.go:168] LocalClient.Create starting
	I0318 03:51:06.438700   11204 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 03:51:06.438758   11204 main.go:141] libmachine: Decoding PEM data...
	I0318 03:51:06.438775   11204 main.go:141] libmachine: Parsing certificate...
	I0318 03:51:06.438850   11204 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 03:51:06.438895   11204 main.go:141] libmachine: Decoding PEM data...
	I0318 03:51:06.438906   11204 main.go:141] libmachine: Parsing certificate...
	I0318 03:51:06.439461   11204 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 03:51:06.590351   11204 main.go:141] libmachine: Creating SSH key...
	I0318 03:51:06.686116   11204 main.go:141] libmachine: Creating Disk image...
	I0318 03:51:06.686124   11204 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 03:51:06.686325   11204 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/addons-852000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/addons-852000/disk.qcow2
	I0318 03:51:06.698817   11204 main.go:141] libmachine: STDOUT: 
	I0318 03:51:06.698843   11204 main.go:141] libmachine: STDERR: 
	I0318 03:51:06.698915   11204 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/addons-852000/disk.qcow2 +20000M
	I0318 03:51:06.709670   11204 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 03:51:06.709692   11204 main.go:141] libmachine: STDERR: 
	I0318 03:51:06.709706   11204 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/addons-852000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/addons-852000/disk.qcow2
	I0318 03:51:06.709712   11204 main.go:141] libmachine: Starting QEMU VM...
	I0318 03:51:06.709752   11204 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/addons-852000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/addons-852000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/addons-852000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:9a:7e:3c:f4:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/addons-852000/disk.qcow2
	I0318 03:51:06.711435   11204 main.go:141] libmachine: STDOUT: 
	I0318 03:51:06.711449   11204 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 03:51:06.711469   11204 client.go:171] duration metric: took 272.876833ms to LocalClient.Create
	I0318 03:51:08.712733   11204 start.go:128] duration metric: took 2.33541425s to createHost
	I0318 03:51:08.712845   11204 start.go:83] releasing machines lock for "addons-852000", held for 2.33599475s
	W0318 03:51:08.713254   11204 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-852000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-852000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 03:51:08.721773   11204 out.go:177] 
	W0318 03:51:08.728888   11204 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 03:51:08.728940   11204 out.go:239] * 
	* 
	W0318 03:51:08.731558   11204 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 03:51:08.739796   11204 out.go:177] 

** /stderr **
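Every failure in this report reduces to the same line in the trace above: Failed to connect to "/var/run/socket_vmnet": Connection refused. "Connection refused" on a unix socket means the path was reachable but nothing was accepting connections, i.e. the socket_vmnet daemon itself was down on the CI agent (however it is managed there), as opposed to never having been installed. A quick probe — a sketch assuming the default socket path used by the qemu2 driver — distinguishes the two cases:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"net"
	"syscall"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	switch {
	case err == nil:
		conn.Close()
		fmt.Println("socket_vmnet is up and accepting connections")
	case errors.Is(err, syscall.ECONNREFUSED):
		// The path exists but nothing is listening: the daemon is down.
		// This is the state every qemu2 start in this report hit.
		fmt.Println("connection refused: socket_vmnet daemon is not running")
	case errors.Is(err, fs.ErrNotExist):
		fmt.Println("no socket file: socket_vmnet was never started")
	default:
		fmt.Println("dial failed:", err)
	}
}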
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-852000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.37s)
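The "executing:" lines in the stderr trace show how the qemu2 driver wires VM networking: qemu-system-aarch64 is launched through socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connection to qemu as inherited file descriptor 3, matching "-netdev socket,id=net0,fd=3". A reconstruction of that launch — a sketch of the logged command, not minikube source, with the argument list abbreviated:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// socket_vmnet_client dials the socket, then runs qemu with the
	// connection as fd 3; qemu's "-netdev socket,fd=3" picks it up.
	cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
		"/var/run/socket_vmnet",
		"qemu-system-aarch64",
		"-M", "virt", "-cpu", "host", "-accel", "hvf",
		"-m", "4000", "-smp", "2",
		"-device", "virtio-net-pci,netdev=net0",
		"-netdev", "socket,id=net0,fd=3",
		"disk.qcow2", // stand-in for the full disk/ISO arguments logged above
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	// With the daemon down this fails exactly as logged:
	//   Failed to connect to "/var/run/socket_vmnet": Connection refused
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}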

TestCertOptions (10.06s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-326000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-326000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.7631495s)

-- stdout --
	* [cert-options-326000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-326000" primary control-plane node in "cert-options-326000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-326000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-326000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-326000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-326000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-326000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (85.332167ms)

-- stdout --
	* The control-plane node cert-options-326000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-326000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-326000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-326000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-326000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-326000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (44.040834ms)

-- stdout --
	* The control-plane node cert-options-326000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-326000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-326000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-326000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-326000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-03-18 04:14:33.26996 -0700 PDT m=+1503.721103501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-326000 -n cert-options-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-326000 -n cert-options-326000: exit status 7 (31.082334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-326000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-326000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-326000
--- FAIL: TestCertOptions (10.06s)
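For reference, the SAN assertions that could not run (the VM never started, so there was no /var/lib/minikube/certs/apiserver.crt to fetch) amount to checking the certificate's DNSNames and IPAddresses against the values passed via --apiserver-names and --apiserver-ips. A self-contained sketch, assuming a hypothetical local copy of the certificate file:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Values from the test's --apiserver-names / --apiserver-ips flags.
	for _, want := range []string{"localhost", "www.google.com"} {
		fmt.Printf("DNS SAN %q present: %v\n", want, hasDNS(cert, want))
	}
	for _, want := range []string{"127.0.0.1", "192.168.15.15"} {
		fmt.Printf("IP SAN %q present: %v\n", want, hasIP(cert, want))
	}
}

func hasDNS(cert *x509.Certificate, name string) bool {
	for _, dns := range cert.DNSNames {
		if dns == name {
			return true
		}
	}
	return false
}

func hasIP(cert *x509.Certificate, ip string) bool {
	want := net.ParseIP(ip)
	for _, got := range cert.IPAddresses {
		if got.Equal(want) {
			return true
		}
	}
	return false
}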

TestCertExpiration (195.34s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-153000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-153000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.921291709s)

-- stdout --
	* [cert-expiration-153000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-153000" primary control-plane node in "cert-expiration-153000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-153000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-153000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-153000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-153000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-153000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.240351833s)

-- stdout --
	* [cert-expiration-153000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-153000" primary control-plane node in "cert-expiration-153000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-153000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-153000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-153000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-153000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-153000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-153000" primary control-plane node in "cert-expiration-153000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-153000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-153000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-153000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-03-18 04:17:23.33819 -0700 PDT m=+1673.795008459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-153000 -n cert-expiration-153000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-153000 -n cert-expiration-153000: exit status 7 (70.337ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-153000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-153000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-153000
--- FAIL: TestCertExpiration (195.34s)
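TestCertExpiration's intended flow explains the ~195s runtime even though both starts failed within seconds: start with --cert-expiration=3m, let the three minutes elapse, then restart with --cert-expiration=8760h and expect the start output to warn about expired certs. The underlying condition is just a NotAfter comparison; a sketch over a hypothetical local copy of the cert:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// With --cert-expiration=3m, NotAfter lands three minutes after creation,
	// so by the second start the cert should already be expired.
	if time.Now().After(cert.NotAfter) {
		fmt.Printf("cert expired %s ago; start should warn and regenerate\n",
			time.Since(cert.NotAfter).Round(time.Second))
	} else {
		fmt.Printf("cert valid for another %s\n",
			time.Until(cert.NotAfter).Round(time.Second))
	}
}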

TestDockerFlags (10.05s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-487000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-487000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.788524334s)

-- stdout --
	* [docker-flags-487000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-487000" primary control-plane node in "docker-flags-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:14:13.317132   13218 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:14:13.317263   13218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:14:13.317267   13218 out.go:304] Setting ErrFile to fd 2...
	I0318 04:14:13.317269   13218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:14:13.317397   13218 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:14:13.318448   13218 out.go:298] Setting JSON to false
	I0318 04:14:13.334565   13218 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8026,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:14:13.334628   13218 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:14:13.340467   13218 out.go:177] * [docker-flags-487000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:14:13.347477   13218 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:14:13.352432   13218 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:14:13.347497   13218 notify.go:220] Checking for updates...
	I0318 04:14:13.358464   13218 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:14:13.361432   13218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:14:13.364444   13218 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:14:13.367482   13218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:14:13.370829   13218 config.go:182] Loaded profile config "cert-expiration-153000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:14:13.370902   13218 config.go:182] Loaded profile config "multinode-886000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:14:13.370955   13218 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:14:13.375418   13218 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:14:13.382340   13218 start.go:297] selected driver: qemu2
	I0318 04:14:13.382345   13218 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:14:13.382350   13218 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:14:13.384616   13218 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:14:13.387437   13218 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:14:13.390525   13218 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0318 04:14:13.390581   13218 cni.go:84] Creating CNI manager for ""
	I0318 04:14:13.390588   13218 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:14:13.390593   13218 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:14:13.390638   13218 start.go:340] cluster config:
	{Name:docker-flags-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:14:13.395264   13218 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:14:13.402368   13218 out.go:177] * Starting "docker-flags-487000" primary control-plane node in "docker-flags-487000" cluster
	I0318 04:14:13.406481   13218 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:14:13.406498   13218 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:14:13.406510   13218 cache.go:56] Caching tarball of preloaded images
	I0318 04:14:13.406566   13218 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:14:13.406573   13218 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:14:13.406653   13218 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/docker-flags-487000/config.json ...
	I0318 04:14:13.406675   13218 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/docker-flags-487000/config.json: {Name:mk5048e0cdcbaf5e09fc5d1e8b317a03e771b68f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:14:13.406891   13218 start.go:360] acquireMachinesLock for docker-flags-487000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:14:13.406926   13218 start.go:364] duration metric: took 27.208µs to acquireMachinesLock for "docker-flags-487000"
	I0318 04:14:13.406940   13218 start.go:93] Provisioning new machine with config: &{Name:docker-flags-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:14:13.406968   13218 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:14:13.411511   13218 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 04:14:13.430061   13218 start.go:159] libmachine.API.Create for "docker-flags-487000" (driver="qemu2")
	I0318 04:14:13.430097   13218 client.go:168] LocalClient.Create starting
	I0318 04:14:13.430163   13218 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:14:13.430192   13218 main.go:141] libmachine: Decoding PEM data...
	I0318 04:14:13.430203   13218 main.go:141] libmachine: Parsing certificate...
	I0318 04:14:13.430255   13218 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:14:13.430278   13218 main.go:141] libmachine: Decoding PEM data...
	I0318 04:14:13.430287   13218 main.go:141] libmachine: Parsing certificate...
	I0318 04:14:13.430721   13218 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:14:13.571684   13218 main.go:141] libmachine: Creating SSH key...
	I0318 04:14:13.655834   13218 main.go:141] libmachine: Creating Disk image...
	I0318 04:14:13.655839   13218 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:14:13.656041   13218 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/docker-flags-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/docker-flags-487000/disk.qcow2
	I0318 04:14:13.668790   13218 main.go:141] libmachine: STDOUT: 
	I0318 04:14:13.668806   13218 main.go:141] libmachine: STDERR: 
	I0318 04:14:13.668866   13218 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/docker-flags-487000/disk.qcow2 +20000M
	I0318 04:14:13.679949   13218 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:14:13.679970   13218 main.go:141] libmachine: STDERR: 
	I0318 04:14:13.679983   13218 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/docker-flags-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/docker-flags-487000/disk.qcow2
	I0318 04:14:13.679987   13218 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:14:13.680022   13218 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/docker-flags-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/docker-flags-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/docker-flags-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:c6:55:47:4f:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/docker-flags-487000/disk.qcow2
	I0318 04:14:13.681864   13218 main.go:141] libmachine: STDOUT: 
	I0318 04:14:13.681878   13218 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:14:13.681898   13218 client.go:171] duration metric: took 251.803542ms to LocalClient.Create
	I0318 04:14:15.684059   13218 start.go:128] duration metric: took 2.277142s to createHost
	I0318 04:14:15.684152   13218 start.go:83] releasing machines lock for "docker-flags-487000", held for 2.277291834s
	W0318 04:14:15.684220   13218 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:14:15.694318   13218 out.go:177] * Deleting "docker-flags-487000" in qemu2 ...
	W0318 04:14:15.733599   13218 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:14:15.733628   13218 start.go:728] Will try again in 5 seconds ...
	I0318 04:14:20.733993   13218 start.go:360] acquireMachinesLock for docker-flags-487000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:14:20.734551   13218 start.go:364] duration metric: took 345.583µs to acquireMachinesLock for "docker-flags-487000"
	I0318 04:14:20.734660   13218 start.go:93] Provisioning new machine with config: &{Name:docker-flags-487000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:14:20.734952   13218 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:14:20.740012   13218 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 04:14:20.790294   13218 start.go:159] libmachine.API.Create for "docker-flags-487000" (driver="qemu2")
	I0318 04:14:20.790341   13218 client.go:168] LocalClient.Create starting
	I0318 04:14:20.790424   13218 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:14:20.790473   13218 main.go:141] libmachine: Decoding PEM data...
	I0318 04:14:20.790490   13218 main.go:141] libmachine: Parsing certificate...
	I0318 04:14:20.790584   13218 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:14:20.790619   13218 main.go:141] libmachine: Decoding PEM data...
	I0318 04:14:20.790635   13218 main.go:141] libmachine: Parsing certificate...
	I0318 04:14:20.791157   13218 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:14:20.942968   13218 main.go:141] libmachine: Creating SSH key...
	I0318 04:14:20.999081   13218 main.go:141] libmachine: Creating Disk image...
	I0318 04:14:20.999087   13218 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:14:20.999283   13218 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/docker-flags-487000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/docker-flags-487000/disk.qcow2
	I0318 04:14:21.011474   13218 main.go:141] libmachine: STDOUT: 
	I0318 04:14:21.011493   13218 main.go:141] libmachine: STDERR: 
	I0318 04:14:21.011542   13218 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/docker-flags-487000/disk.qcow2 +20000M
	I0318 04:14:21.022285   13218 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:14:21.022314   13218 main.go:141] libmachine: STDERR: 
	I0318 04:14:21.022326   13218 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/docker-flags-487000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/docker-flags-487000/disk.qcow2
	I0318 04:14:21.022330   13218 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:14:21.022362   13218 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/docker-flags-487000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/docker-flags-487000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/docker-flags-487000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:fc:72:c4:44:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/docker-flags-487000/disk.qcow2
	I0318 04:14:21.024053   13218 main.go:141] libmachine: STDOUT: 
	I0318 04:14:21.024070   13218 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:14:21.024084   13218 client.go:171] duration metric: took 233.74675ms to LocalClient.Create
	I0318 04:14:23.026179   13218 start.go:128] duration metric: took 2.2912745s to createHost
	I0318 04:14:23.026230   13218 start.go:83] releasing machines lock for "docker-flags-487000", held for 2.291727417s
	W0318 04:14:23.026613   13218 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:14:23.041227   13218 out.go:177] 
	W0318 04:14:23.045360   13218 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:14:23.045406   13218 out.go:239] * 
	* 
	W0318 04:14:23.048067   13218 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:14:23.060250   13218 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-487000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-487000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-487000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.744708ms)

-- stdout --
	* The control-plane node docker-flags-487000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-487000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-487000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-487000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-487000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-487000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-487000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-487000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-487000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (47.725875ms)

-- stdout --
	* The control-plane node docker-flags-487000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-487000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-487000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-487000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-487000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-487000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-03-18 04:14:23.205254 -0700 PDT m=+1493.656061417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-487000 -n docker-flags-487000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-487000 -n docker-flags-487000: exit status 7 (31.970667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-487000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-487000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-487000
--- FAIL: TestDockerFlags (10.05s)
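Note on the failure mode: both createHost attempts above die before QEMU is ever launched, because socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so every later assertion only sees a stopped profile. A minimal triage sketch for the CI host (illustrative commands; they assume socket_vmnet was installed from source under /opt/socket_vmnet, as the paths in the log suggest):

	# Is the daemon's unix socket present on disk?
	ls -l /var/run/socket_vmnet

	# Is a socket_vmnet process alive?
	pgrep -fl socket_vmnet

	# Probe the socket the same way the qemu2 driver does; a dead daemon
	# reproduces the exact "Connection refused" error seen above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true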

TestForceSystemdFlag (10.08s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-805000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-805000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.86348675s)

-- stdout --
	* [force-systemd-flag-805000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-805000" primary control-plane node in "force-systemd-flag-805000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-805000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:13:42.205706   13091 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:13:42.205846   13091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:13:42.205850   13091 out.go:304] Setting ErrFile to fd 2...
	I0318 04:13:42.205852   13091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:13:42.205961   13091 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:13:42.207017   13091 out.go:298] Setting JSON to false
	I0318 04:13:42.223098   13091 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7995,"bootTime":1710752427,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:13:42.223152   13091 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:13:42.227493   13091 out.go:177] * [force-systemd-flag-805000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:13:42.234457   13091 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:13:42.241521   13091 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:13:42.234490   13091 notify.go:220] Checking for updates...
	I0318 04:13:42.244386   13091 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:13:42.247457   13091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:13:42.250470   13091 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:13:42.251771   13091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:13:42.254843   13091 config.go:182] Loaded profile config "NoKubernetes-112000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0318 04:13:42.254918   13091 config.go:182] Loaded profile config "multinode-886000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:13:42.254962   13091 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:13:42.259426   13091 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:13:42.264437   13091 start.go:297] selected driver: qemu2
	I0318 04:13:42.264444   13091 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:13:42.264450   13091 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:13:42.266765   13091 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:13:42.271498   13091 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:13:42.274569   13091 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 04:13:42.274619   13091 cni.go:84] Creating CNI manager for ""
	I0318 04:13:42.274628   13091 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:13:42.274638   13091 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:13:42.274691   13091 start.go:340] cluster config:
	{Name:force-systemd-flag-805000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-805000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:13:42.279194   13091 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:13:42.287481   13091 out.go:177] * Starting "force-systemd-flag-805000" primary control-plane node in "force-systemd-flag-805000" cluster
	I0318 04:13:42.291485   13091 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:13:42.291503   13091 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:13:42.291513   13091 cache.go:56] Caching tarball of preloaded images
	I0318 04:13:42.291580   13091 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:13:42.291587   13091 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:13:42.291665   13091 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/force-systemd-flag-805000/config.json ...
	I0318 04:13:42.291680   13091 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/force-systemd-flag-805000/config.json: {Name:mk52f6eb547b49f612707c412a2e576e1ea286b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:13:42.291906   13091 start.go:360] acquireMachinesLock for force-systemd-flag-805000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:13:42.291943   13091 start.go:364] duration metric: took 27.458µs to acquireMachinesLock for "force-systemd-flag-805000"
	I0318 04:13:42.291957   13091 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-805000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-805000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:13:42.291992   13091 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:13:42.299426   13091 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 04:13:42.317878   13091 start.go:159] libmachine.API.Create for "force-systemd-flag-805000" (driver="qemu2")
	I0318 04:13:42.317903   13091 client.go:168] LocalClient.Create starting
	I0318 04:13:42.317971   13091 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:13:42.318004   13091 main.go:141] libmachine: Decoding PEM data...
	I0318 04:13:42.318015   13091 main.go:141] libmachine: Parsing certificate...
	I0318 04:13:42.318062   13091 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:13:42.318090   13091 main.go:141] libmachine: Decoding PEM data...
	I0318 04:13:42.318101   13091 main.go:141] libmachine: Parsing certificate...
	I0318 04:13:42.318511   13091 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:13:42.461730   13091 main.go:141] libmachine: Creating SSH key...
	I0318 04:13:42.628360   13091 main.go:141] libmachine: Creating Disk image...
	I0318 04:13:42.628368   13091 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:13:42.628541   13091 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-flag-805000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-flag-805000/disk.qcow2
	I0318 04:13:42.640931   13091 main.go:141] libmachine: STDOUT: 
	I0318 04:13:42.640946   13091 main.go:141] libmachine: STDERR: 
	I0318 04:13:42.640996   13091 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-flag-805000/disk.qcow2 +20000M
	I0318 04:13:42.651583   13091 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:13:42.651602   13091 main.go:141] libmachine: STDERR: 
	I0318 04:13:42.651616   13091 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-flag-805000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-flag-805000/disk.qcow2
	I0318 04:13:42.651622   13091 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:13:42.651660   13091 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-flag-805000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-flag-805000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-flag-805000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:ba:da:00:7a:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-flag-805000/disk.qcow2
	I0318 04:13:42.653393   13091 main.go:141] libmachine: STDOUT: 
	I0318 04:13:42.653407   13091 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:13:42.653429   13091 client.go:171] duration metric: took 335.530333ms to LocalClient.Create
	I0318 04:13:44.654487   13091 start.go:128] duration metric: took 2.362544167s to createHost
	I0318 04:13:44.654583   13091 start.go:83] releasing machines lock for "force-systemd-flag-805000", held for 2.362695708s
	W0318 04:13:44.654681   13091 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:13:44.664424   13091 out.go:177] * Deleting "force-systemd-flag-805000" in qemu2 ...
	W0318 04:13:44.687729   13091 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:13:44.687759   13091 start.go:728] Will try again in 5 seconds ...
	I0318 04:13:49.689803   13091 start.go:360] acquireMachinesLock for force-systemd-flag-805000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:13:49.690220   13091 start.go:364] duration metric: took 292.917µs to acquireMachinesLock for "force-systemd-flag-805000"
	I0318 04:13:49.690293   13091 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-805000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-805000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:13:49.690647   13091 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:13:49.698380   13091 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 04:13:49.750325   13091 start.go:159] libmachine.API.Create for "force-systemd-flag-805000" (driver="qemu2")
	I0318 04:13:49.750366   13091 client.go:168] LocalClient.Create starting
	I0318 04:13:49.750461   13091 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:13:49.750525   13091 main.go:141] libmachine: Decoding PEM data...
	I0318 04:13:49.750543   13091 main.go:141] libmachine: Parsing certificate...
	I0318 04:13:49.750600   13091 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:13:49.750641   13091 main.go:141] libmachine: Decoding PEM data...
	I0318 04:13:49.750652   13091 main.go:141] libmachine: Parsing certificate...
	I0318 04:13:49.751164   13091 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:13:49.903341   13091 main.go:141] libmachine: Creating SSH key...
	I0318 04:13:49.969440   13091 main.go:141] libmachine: Creating Disk image...
	I0318 04:13:49.969446   13091 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:13:49.969643   13091 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-flag-805000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-flag-805000/disk.qcow2
	I0318 04:13:49.981984   13091 main.go:141] libmachine: STDOUT: 
	I0318 04:13:49.982004   13091 main.go:141] libmachine: STDERR: 
	I0318 04:13:49.982058   13091 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-flag-805000/disk.qcow2 +20000M
	I0318 04:13:49.992810   13091 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:13:49.992824   13091 main.go:141] libmachine: STDERR: 
	I0318 04:13:49.992841   13091 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-flag-805000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-flag-805000/disk.qcow2
	I0318 04:13:49.992846   13091 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:13:49.992891   13091 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-flag-805000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-flag-805000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-flag-805000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:52:46:70:3c:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-flag-805000/disk.qcow2
	I0318 04:13:49.994646   13091 main.go:141] libmachine: STDOUT: 
	I0318 04:13:49.994660   13091 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:13:49.994675   13091 client.go:171] duration metric: took 244.313459ms to LocalClient.Create
	I0318 04:13:51.996779   13091 start.go:128] duration metric: took 2.306177s to createHost
	I0318 04:13:51.996818   13091 start.go:83] releasing machines lock for "force-systemd-flag-805000", held for 2.306649292s
	W0318 04:13:51.997133   13091 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-805000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-805000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:13:52.005660   13091 out.go:177] 
	W0318 04:13:52.011812   13091 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:13:52.011836   13091 out.go:239] * 
	* 
	W0318 04:13:52.014763   13091 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:13:52.022581   13091 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-805000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-805000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-805000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.928667ms)

-- stdout --
	* The control-plane node force-systemd-flag-805000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-805000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-805000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-03-18 04:13:52.121864 -0700 PDT m=+1462.571634376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-805000 -n force-systemd-flag-805000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-805000 -n force-systemd-flag-805000: exit status 7 (35.134459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-805000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-805000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-805000
--- FAIL: TestForceSystemdFlag (10.08s)
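Same root cause as TestDockerFlags above: the VM is never created, so the cgroup-driver check at docker_test.go:110 only ever sees a stopped profile. If the daemon has simply died on the agent, restarting it should clear this whole family of socket_vmnet failures; a sketch, assuming a launchd-managed install using the upstream lima-vm plist label (adjust the label to whatever this agent actually uses):

	# Restart the daemon, then confirm the socket accepts connections again
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true && echo "socket_vmnet reachable"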

TestForceSystemdEnv (10.54s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-015000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-015000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.320983042s)

-- stdout --
	* [force-systemd-env-015000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-015000" primary control-plane node in "force-systemd-env-015000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-015000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:14:02.775472   13163 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:14:02.775600   13163 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:14:02.775607   13163 out.go:304] Setting ErrFile to fd 2...
	I0318 04:14:02.775609   13163 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:14:02.775737   13163 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:14:02.776857   13163 out.go:298] Setting JSON to false
	I0318 04:14:02.792921   13163 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8015,"bootTime":1710752427,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:14:02.792990   13163 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:14:02.799101   13163 out.go:177] * [force-systemd-env-015000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:14:02.807247   13163 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:14:02.807302   13163 notify.go:220] Checking for updates...
	I0318 04:14:02.815179   13163 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:14:02.818208   13163 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:14:02.821168   13163 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:14:02.824210   13163 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:14:02.827246   13163 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0318 04:14:02.828949   13163 config.go:182] Loaded profile config "multinode-886000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:14:02.829000   13163 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:14:02.833164   13163 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:14:02.840206   13163 start.go:297] selected driver: qemu2
	I0318 04:14:02.840212   13163 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:14:02.840222   13163 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:14:02.842512   13163 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:14:02.847153   13163 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:14:02.851314   13163 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 04:14:02.851363   13163 cni.go:84] Creating CNI manager for ""
	I0318 04:14:02.851371   13163 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:14:02.851375   13163 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:14:02.851413   13163 start.go:340] cluster config:
	{Name:force-systemd-env-015000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:14:02.856092   13163 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:14:02.864181   13163 out.go:177] * Starting "force-systemd-env-015000" primary control-plane node in "force-systemd-env-015000" cluster
	I0318 04:14:02.868150   13163 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:14:02.868168   13163 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:14:02.868179   13163 cache.go:56] Caching tarball of preloaded images
	I0318 04:14:02.868249   13163 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:14:02.868255   13163 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:14:02.868315   13163 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/force-systemd-env-015000/config.json ...
	I0318 04:14:02.868332   13163 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/force-systemd-env-015000/config.json: {Name:mkffa97f424c0b3363bc89106178d969f4c69ce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:14:02.868550   13163 start.go:360] acquireMachinesLock for force-systemd-env-015000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:14:02.868585   13163 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "force-systemd-env-015000"
	I0318 04:14:02.868600   13163 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:14:02.868631   13163 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:14:02.873228   13163 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 04:14:02.890124   13163 start.go:159] libmachine.API.Create for "force-systemd-env-015000" (driver="qemu2")
	I0318 04:14:02.890155   13163 client.go:168] LocalClient.Create starting
	I0318 04:14:02.890215   13163 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:14:02.890261   13163 main.go:141] libmachine: Decoding PEM data...
	I0318 04:14:02.890272   13163 main.go:141] libmachine: Parsing certificate...
	I0318 04:14:02.890316   13163 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:14:02.890338   13163 main.go:141] libmachine: Decoding PEM data...
	I0318 04:14:02.890345   13163 main.go:141] libmachine: Parsing certificate...
	I0318 04:14:02.890693   13163 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:14:03.030790   13163 main.go:141] libmachine: Creating SSH key...
	I0318 04:14:03.147757   13163 main.go:141] libmachine: Creating Disk image...
	I0318 04:14:03.147765   13163 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:14:03.147962   13163 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-env-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-env-015000/disk.qcow2
	I0318 04:14:03.160818   13163 main.go:141] libmachine: STDOUT: 
	I0318 04:14:03.160840   13163 main.go:141] libmachine: STDERR: 
	I0318 04:14:03.160896   13163 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-env-015000/disk.qcow2 +20000M
	I0318 04:14:03.171984   13163 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:14:03.172004   13163 main.go:141] libmachine: STDERR: 
	I0318 04:14:03.172023   13163 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-env-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-env-015000/disk.qcow2
	I0318 04:14:03.172027   13163 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:14:03.172057   13163 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-env-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-env-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-env-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:c0:63:d1:3c:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-env-015000/disk.qcow2
	I0318 04:14:03.173831   13163 main.go:141] libmachine: STDOUT: 
	I0318 04:14:03.173845   13163 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:14:03.173872   13163 client.go:171] duration metric: took 283.722125ms to LocalClient.Create
	I0318 04:14:05.175406   13163 start.go:128] duration metric: took 2.306845791s to createHost
	I0318 04:14:05.175422   13163 start.go:83] releasing machines lock for "force-systemd-env-015000", held for 2.306909792s
	W0318 04:14:05.175442   13163 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:14:05.183299   13163 out.go:177] * Deleting "force-systemd-env-015000" in qemu2 ...
	W0318 04:14:05.192341   13163 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:14:05.192362   13163 start.go:728] Will try again in 5 seconds ...
	I0318 04:14:10.194435   13163 start.go:360] acquireMachinesLock for force-systemd-env-015000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:14:10.685398   13163 start.go:364] duration metric: took 490.867792ms to acquireMachinesLock for "force-systemd-env-015000"
	I0318 04:14:10.685551   13163 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:14:10.685806   13163 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:14:10.690565   13163 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 04:14:10.737910   13163 start.go:159] libmachine.API.Create for "force-systemd-env-015000" (driver="qemu2")
	I0318 04:14:10.737969   13163 client.go:168] LocalClient.Create starting
	I0318 04:14:10.738078   13163 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:14:10.738134   13163 main.go:141] libmachine: Decoding PEM data...
	I0318 04:14:10.738151   13163 main.go:141] libmachine: Parsing certificate...
	I0318 04:14:10.738206   13163 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:14:10.738246   13163 main.go:141] libmachine: Decoding PEM data...
	I0318 04:14:10.738256   13163 main.go:141] libmachine: Parsing certificate...
	I0318 04:14:10.738903   13163 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:14:10.888830   13163 main.go:141] libmachine: Creating SSH key...
	I0318 04:14:10.982806   13163 main.go:141] libmachine: Creating Disk image...
	I0318 04:14:10.982811   13163 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:14:10.983001   13163 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-env-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-env-015000/disk.qcow2
	I0318 04:14:11.000041   13163 main.go:141] libmachine: STDOUT: 
	I0318 04:14:11.000062   13163 main.go:141] libmachine: STDERR: 
	I0318 04:14:11.000112   13163 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-env-015000/disk.qcow2 +20000M
	I0318 04:14:11.010786   13163 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:14:11.010802   13163 main.go:141] libmachine: STDERR: 
	I0318 04:14:11.010813   13163 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-env-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-env-015000/disk.qcow2
	I0318 04:14:11.010818   13163 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:14:11.010849   13163 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-env-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-env-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-env-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:49:a6:0e:da:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/force-systemd-env-015000/disk.qcow2
	I0318 04:14:11.012596   13163 main.go:141] libmachine: STDOUT: 
	I0318 04:14:11.012613   13163 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:14:11.012626   13163 client.go:171] duration metric: took 274.660209ms to LocalClient.Create
	I0318 04:14:13.013181   13163 start.go:128] duration metric: took 2.327395291s to createHost
	I0318 04:14:13.013245   13163 start.go:83] releasing machines lock for "force-systemd-env-015000", held for 2.327893042s
	W0318 04:14:13.013622   13163 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-015000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-015000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:14:13.030012   13163 out.go:177] 
	W0318 04:14:13.037177   13163 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:14:13.037207   13163 out.go:239] * 
	* 
	W0318 04:14:13.040143   13163 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:14:13.050120   13163 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-015000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-015000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-015000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.009584ms)

-- stdout --
	* The control-plane node force-systemd-env-015000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-015000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-015000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-03-18 04:14:13.147629 -0700 PDT m=+1483.598101042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-015000 -n force-systemd-env-015000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-015000 -n force-systemd-env-015000: exit status 7 (35.277333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-015000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-015000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-015000
--- FAIL: TestForceSystemdEnv (10.54s)
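
Every failure in this block, and in the blocks that follow, reduces to the same condition: the qemu2 driver hands the VM's network to /opt/socket_vmnet/bin/socket_vmnet_client, and nothing is listening on /var/run/socket_vmnet, so each launch aborts with "Connection refused". A minimal standalone probe of that socket, sketched here with only the Go standard library (this file is illustrative and not part of the minikube tree), reproduces the failing precondition:

// probe.go - check whether the socket_vmnet daemon is accepting
// connections on the unix socket that socket_vmnet_client uses.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// A stopped daemon yields "connect: connection refused",
		// the same error threaded through every log above.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}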

TestErrorSpam/setup (9.76s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-289000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-289000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 --driver=qemu2 : exit status 80 (9.759683333s)

-- stdout --
	* [nospam-289000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-289000" primary control-plane node in "nospam-289000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-289000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-289000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-289000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-289000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18431
- KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-289000" primary control-plane node in "nospam-289000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-289000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.76s)
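
Note that TestErrorSpam/setup fails on two independent counts: the start itself exits 80, and the retry/abort messages land in stderr, which this test requires to be free of anything outside a small allowlist. A toy version of that assertion (the helper name and allowlist entry are mine; the real check lives in error_spam_test.go) is:

// spamcheck.go - report stderr lines matching no allowlisted pattern.
package main

import (
	"fmt"
	"strings"
)

func unexpectedLines(stderr string, allowed []string) []string {
	var bad []string
	for _, line := range strings.Split(stderr, "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		permitted := false
		for _, a := range allowed {
			if strings.Contains(line, a) {
				permitted = true
				break
			}
		}
		if !permitted {
			bad = append(bad, line)
		}
	}
	return bad
}

func main() {
	stderr := "! StartHost failed, but will try again: creating host: ...\n* Failed to start qemu2 VM ..."
	for _, l := range unexpectedLines(stderr, []string{"Things might not work"}) {
		fmt.Println("unexpected stderr:", l)
	}
}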

TestFunctional/serial/StartWithProxy (9.86s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-162000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-162000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.791916542s)

-- stdout --
	* [functional-162000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-162000" primary control-plane node in "functional-162000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-162000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52112 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52112 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52112 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-162000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-162000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-162000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18431
- KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-162000" primary control-plane node in "functional-162000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-162000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:52112 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:52112 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:52112 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-162000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000: exit status 7 (70.614542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-162000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.86s)
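
The proxy-specific behavior visible here is the repeated "Local proxy ignored: not passing HTTP_PROXY=localhost:52112 to docker env." warning: a proxy bound to the host's loopback is unreachable from inside the guest VM, so it is deliberately not forwarded. A self-contained sketch of that loopback test (the helper name is an assumption, not minikube's API):

// proxycheck.go - decide whether a proxy address points at loopback
// and therefore cannot be used from inside a VM.
package main

import (
	"fmt"
	"net"
)

func isLoopbackProxy(hostPort string) bool {
	host, _, err := net.SplitHostPort(hostPort)
	if err != nil {
		host = hostPort // no port component
	}
	if host == "localhost" {
		return true
	}
	ip := net.ParseIP(host)
	return ip != nil && ip.IsLoopback()
}

func main() {
	fmt.Println(isLoopbackProxy("localhost:52112"))        // true: warn and skip
	fmt.Println(isLoopbackProxy("proxy.example.com:3128")) // false: pass through
}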

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-162000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-162000 --alsologtostderr -v=8: exit status 80 (5.188224541s)

-- stdout --
	* [functional-162000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-162000" primary control-plane node in "functional-162000" cluster
	* Restarting existing qemu2 VM for "functional-162000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-162000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 03:51:39.060837   11358 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:51:39.060964   11358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:51:39.061103   11358 out.go:304] Setting ErrFile to fd 2...
	I0318 03:51:39.061175   11358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:51:39.061823   11358 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:51:39.063199   11358 out.go:298] Setting JSON to false
	I0318 03:51:39.079556   11358 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6672,"bootTime":1710752427,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 03:51:39.079616   11358 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 03:51:39.084001   11358 out.go:177] * [functional-162000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 03:51:39.091197   11358 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 03:51:39.094126   11358 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 03:51:39.091231   11358 notify.go:220] Checking for updates...
	I0318 03:51:39.098097   11358 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 03:51:39.102106   11358 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 03:51:39.105145   11358 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 03:51:39.108164   11358 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 03:51:39.111414   11358 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:51:39.111477   11358 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 03:51:39.116121   11358 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 03:51:39.123114   11358 start.go:297] selected driver: qemu2
	I0318 03:51:39.123120   11358 start.go:901] validating driver "qemu2" against &{Name:functional-162000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-162000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 03:51:39.123180   11358 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 03:51:39.125408   11358 cni.go:84] Creating CNI manager for ""
	I0318 03:51:39.125426   11358 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 03:51:39.125474   11358 start.go:340] cluster config:
	{Name:functional-162000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-162000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 03:51:39.130006   11358 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 03:51:39.137184   11358 out.go:177] * Starting "functional-162000" primary control-plane node in "functional-162000" cluster
	I0318 03:51:39.141152   11358 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 03:51:39.141168   11358 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 03:51:39.141184   11358 cache.go:56] Caching tarball of preloaded images
	I0318 03:51:39.141248   11358 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 03:51:39.141254   11358 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 03:51:39.141336   11358 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/functional-162000/config.json ...
	I0318 03:51:39.141821   11358 start.go:360] acquireMachinesLock for functional-162000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 03:51:39.141847   11358 start.go:364] duration metric: took 20.625µs to acquireMachinesLock for "functional-162000"
	I0318 03:51:39.141857   11358 start.go:96] Skipping create...Using existing machine configuration
	I0318 03:51:39.141864   11358 fix.go:54] fixHost starting: 
	I0318 03:51:39.141982   11358 fix.go:112] recreateIfNeeded on functional-162000: state=Stopped err=<nil>
	W0318 03:51:39.141991   11358 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 03:51:39.146145   11358 out.go:177] * Restarting existing qemu2 VM for "functional-162000" ...
	I0318 03:51:39.154112   11358 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:44:a5:a5:e0:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/disk.qcow2
	I0318 03:51:39.156116   11358 main.go:141] libmachine: STDOUT: 
	I0318 03:51:39.156142   11358 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 03:51:39.156172   11358 fix.go:56] duration metric: took 14.309167ms for fixHost
	I0318 03:51:39.156177   11358 start.go:83] releasing machines lock for "functional-162000", held for 14.326041ms
	W0318 03:51:39.156184   11358 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 03:51:39.156220   11358 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 03:51:39.156224   11358 start.go:728] Will try again in 5 seconds ...
	I0318 03:51:44.157132   11358 start.go:360] acquireMachinesLock for functional-162000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 03:51:44.157570   11358 start.go:364] duration metric: took 378.875µs to acquireMachinesLock for "functional-162000"
	I0318 03:51:44.157699   11358 start.go:96] Skipping create...Using existing machine configuration
	I0318 03:51:44.157724   11358 fix.go:54] fixHost starting: 
	I0318 03:51:44.158370   11358 fix.go:112] recreateIfNeeded on functional-162000: state=Stopped err=<nil>
	W0318 03:51:44.158395   11358 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 03:51:44.162843   11358 out.go:177] * Restarting existing qemu2 VM for "functional-162000" ...
	I0318 03:51:44.170870   11358 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:44:a5:a5:e0:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/disk.qcow2
	I0318 03:51:44.180447   11358 main.go:141] libmachine: STDOUT: 
	I0318 03:51:44.180507   11358 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 03:51:44.180574   11358 fix.go:56] duration metric: took 22.856708ms for fixHost
	I0318 03:51:44.180604   11358 start.go:83] releasing machines lock for "functional-162000", held for 23.004958ms
	W0318 03:51:44.180762   11358 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-162000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-162000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 03:51:44.187763   11358 out.go:177] 
	W0318 03:51:44.191802   11358 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 03:51:44.191826   11358 out.go:239] * 
	* 
	W0318 03:51:44.194283   11358 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 03:51:44.202692   11358 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-162000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.190025167s for "functional-162000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000: exit status 7 (69.919292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-162000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)
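
The exit codes in this report are deliberate signals, not noise: 80 (GUEST_PROVISION) comes from a start that could not provision the VM, 83 from subcommands run against a profile whose host is stopped, and 7 from "minikube status" on a stopped host. Recovering the numeric status from a subprocess, as the harness does after each (dbg) Run, takes only a few lines of Go (a sketch; "false" stands in for the minikube binary):

// exitcode.go - extract the numeric exit status of a child process.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("false") // placeholder for out/minikube-darwin-arm64 ...
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// For the runs above this would print 80, 83, or 7.
		fmt.Println("exit status:", exitErr.ExitCode())
	}
}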

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (29.99775ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-162000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000: exit status 7 (32.113875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-162000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-162000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-162000 get po -A: exit status 1 (25.844459ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-162000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-162000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-162000\n"*: args "kubectl --context functional-162000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-162000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000: exit status 7 (32.328291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-162000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)
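
Both kubectl failures here are downstream of the start failure: the cluster never came up, so no "functional-162000" entry was ever written to the kubeconfig, and any invocation pinned to that context fails before contacting a server. A minimal reproduction of the test's invocation (assuming only that kubectl is on PATH):

// kubectlctx.go - run kubectl against an explicit context and detect
// the "context was not found" configuration error seen above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "functional-162000", "get", "po", "-A")
	out, err := cmd.CombinedOutput()
	if err != nil && strings.Contains(string(out), "context was not found") {
		fmt.Println("profile context missing; the cluster was never started")
		return
	}
	fmt.Print(string(out))
}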

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh sudo crictl images: exit status 83 (42.79875ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-162000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (46.945541ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-162000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (44.826792ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (42.806167ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-162000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 kubectl -- --context functional-162000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 kubectl -- --context functional-162000 get pods: exit status 1 (520.799083ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-162000
	* no server found for cluster "functional-162000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-162000 kubectl -- --context functional-162000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000: exit status 7 (33.984542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-162000" host is not running, skipping log retrieval (state="Stopped")
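The post-mortem's --format={{.Host}} flag is a Go text/template rendered against minikube's status struct, so the bare "Stopped" above is just the Host field. A minimal reproduction of that mechanism; the struct fields other than Host are illustrative:

package main

import (
	"os"
	"text/template"
)

// Status mirrors the shape minikube renders with --format; only Host is
// grounded in the output above, the other fields are illustrative.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// Prints "Stopped", matching the post-mortem output above.
	_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped"})
}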
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.56s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.71s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-162000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-162000 get pods: exit status 1 (681.738667ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-162000
	* no server found for cluster "functional-162000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-162000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000: exit status 7 (31.014292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-162000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.71s)

TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-162000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-162000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.187030625s)

-- stdout --
	* [functional-162000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-162000" primary control-plane node in "functional-162000" cluster
	* Restarting existing qemu2 VM for "functional-162000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-162000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-162000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-162000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.187600208s for "functional-162000" cluster.
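The root cause is visible in the stderr above: QEMU is launched through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the VM never comes back up. Dialing the socket directly confirms whether the daemon is serving; a small diagnostic sketch, not part of the test suite:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The same unix socket socket_vmnet_client tries to reach before launching QEMU.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "connection refused" (as in this run) or "no such file or directory"
		// both mean the socket_vmnet daemon is not serving the socket.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}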
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000: exit status 7 (71.387084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-162000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-162000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-162000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.000625ms)

** stderr ** 
	error: context "functional-162000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-162000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
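ComponentHealth fetches the control-plane pods as JSON via a label selector. The equivalent client-go call, sketched under the assumption of a working kubeconfig, which is exactly what is missing in this run (BuildConfigFromFlags simply uses the kubeconfig's current context here):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from ~/.kube/config; with no functional-162000 context,
	// this run would fail before ever reaching the API server.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "tier=control-plane"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}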
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000: exit status 7 (32.460042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-162000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 logs: exit status 83 (80.742917ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-983000 | jenkins | v1.32.0 | 18 Mar 24 03:49 PDT |                     |
	|         | -p download-only-983000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
	| delete  | -p download-only-983000                                                  | download-only-983000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
	| start   | -o=json --download-only                                                  | download-only-875000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
	|         | -p download-only-875000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
	| delete  | -p download-only-875000                                                  | download-only-875000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
	| start   | -o=json --download-only                                                  | download-only-053000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
	|         | -p download-only-053000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                        |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
	| delete  | -p download-only-053000                                                  | download-only-053000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
	| delete  | -p download-only-983000                                                  | download-only-983000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
	| delete  | -p download-only-875000                                                  | download-only-875000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
	| delete  | -p download-only-053000                                                  | download-only-053000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
	| start   | --download-only -p                                                       | binary-mirror-389000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
	|         | binary-mirror-389000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:52076                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-389000                                                  | binary-mirror-389000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
	| addons  | disable dashboard -p                                                     | addons-852000        | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
	|         | addons-852000                                                            |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                      | addons-852000        | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
	|         | addons-852000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-852000 --wait=true                                             | addons-852000        | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-852000                                                         | addons-852000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
	| start   | -p nospam-289000 -n=1 --memory=2250 --wait=false                         | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
	|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-289000                                                         | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
	| start   | -p functional-162000                                                     | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-162000                                                     | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-162000 cache add                                              | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-162000 cache add                                              | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-162000 cache add                                              | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-162000 cache add                                              | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
	|         | minikube-local-cache-test:functional-162000                              |                      |         |         |                     |                     |
	| cache   | functional-162000 cache delete                                           | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
	|         | minikube-local-cache-test:functional-162000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
	| ssh     | functional-162000 ssh sudo                                               | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-162000                                                        | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-162000 ssh                                                    | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-162000 cache reload                                           | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
	| ssh     | functional-162000 ssh                                                    | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-162000 kubectl --                                             | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
	|         | --context functional-162000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-162000                                                     | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 03:51:53
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 03:51:53.392800   11437 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:51:53.392913   11437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:51:53.392915   11437 out.go:304] Setting ErrFile to fd 2...
	I0318 03:51:53.392917   11437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:51:53.393031   11437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:51:53.393991   11437 out.go:298] Setting JSON to false
	I0318 03:51:53.409805   11437 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6686,"bootTime":1710752427,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 03:51:53.409852   11437 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 03:51:53.415060   11437 out.go:177] * [functional-162000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 03:51:53.424078   11437 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 03:51:53.428013   11437 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 03:51:53.424099   11437 notify.go:220] Checking for updates...
	I0318 03:51:53.436017   11437 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 03:51:53.439106   11437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 03:51:53.442001   11437 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 03:51:53.445012   11437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 03:51:53.448349   11437 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:51:53.448400   11437 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 03:51:53.453020   11437 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 03:51:53.460026   11437 start.go:297] selected driver: qemu2
	I0318 03:51:53.460030   11437 start.go:901] validating driver "qemu2" against &{Name:functional-162000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-162000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 03:51:53.460100   11437 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 03:51:53.462478   11437 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 03:51:53.462522   11437 cni.go:84] Creating CNI manager for ""
	I0318 03:51:53.462528   11437 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 03:51:53.462568   11437 start.go:340] cluster config:
	{Name:functional-162000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-162000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 03:51:53.467174   11437 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 03:51:53.473990   11437 out.go:177] * Starting "functional-162000" primary control-plane node in "functional-162000" cluster
	I0318 03:51:53.477900   11437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 03:51:53.477913   11437 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 03:51:53.477921   11437 cache.go:56] Caching tarball of preloaded images
	I0318 03:51:53.477976   11437 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 03:51:53.477980   11437 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 03:51:53.478051   11437 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/functional-162000/config.json ...
	I0318 03:51:53.478522   11437 start.go:360] acquireMachinesLock for functional-162000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 03:51:53.478553   11437 start.go:364] duration metric: took 26.458µs to acquireMachinesLock for "functional-162000"
	I0318 03:51:53.478561   11437 start.go:96] Skipping create...Using existing machine configuration
	I0318 03:51:53.478566   11437 fix.go:54] fixHost starting: 
	I0318 03:51:53.478688   11437 fix.go:112] recreateIfNeeded on functional-162000: state=Stopped err=<nil>
	W0318 03:51:53.478695   11437 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 03:51:53.487034   11437 out.go:177] * Restarting existing qemu2 VM for "functional-162000" ...
	I0318 03:51:53.490144   11437 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:44:a5:a5:e0:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/disk.qcow2
	I0318 03:51:53.492191   11437 main.go:141] libmachine: STDOUT: 
	I0318 03:51:53.492208   11437 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 03:51:53.492236   11437 fix.go:56] duration metric: took 13.671042ms for fixHost
	I0318 03:51:53.492240   11437 start.go:83] releasing machines lock for "functional-162000", held for 13.684958ms
	W0318 03:51:53.492245   11437 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 03:51:53.492274   11437 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 03:51:53.492279   11437 start.go:728] Will try again in 5 seconds ...
	I0318 03:51:58.492481   11437 start.go:360] acquireMachinesLock for functional-162000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 03:51:58.492834   11437 start.go:364] duration metric: took 295.959µs to acquireMachinesLock for "functional-162000"
	I0318 03:51:58.492997   11437 start.go:96] Skipping create...Using existing machine configuration
	I0318 03:51:58.493016   11437 fix.go:54] fixHost starting: 
	I0318 03:51:58.493756   11437 fix.go:112] recreateIfNeeded on functional-162000: state=Stopped err=<nil>
	W0318 03:51:58.493777   11437 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 03:51:58.499267   11437 out.go:177] * Restarting existing qemu2 VM for "functional-162000" ...
	I0318 03:51:58.503310   11437 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:44:a5:a5:e0:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/disk.qcow2
	I0318 03:51:58.513703   11437 main.go:141] libmachine: STDOUT: 
	I0318 03:51:58.513775   11437 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 03:51:58.513894   11437 fix.go:56] duration metric: took 20.884416ms for fixHost
	I0318 03:51:58.513911   11437 start.go:83] releasing machines lock for "functional-162000", held for 21.02925ms
	W0318 03:51:58.514126   11437 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-162000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 03:51:58.522958   11437 out.go:177] 
	W0318 03:51:58.527204   11437 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 03:51:58.527226   11437 out.go:239] * 
	W0318 03:51:58.529729   11437 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 03:51:58.537239   11437 out.go:177] 
	
	
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-162000 logs failed: exit status 83
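The Last Start log above also shows minikube's recovery flow: StartHost fails, start.go waits five seconds, retries once, then exits with GUEST_PROVISION. A stand-in sketch of that control flow; startHost is a placeholder for the real driver start, not minikube's code:

package main

import (
	"fmt"
	"time"
)

// startHost stands in for the driver start that fails twice in this run.
func startHost() error {
	return fmt.Errorf(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	// Mirrors the flow in the Last Start log: fail, wait 5s, retry once,
	// then give up with a GUEST_PROVISION error.
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}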
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-983000 | jenkins | v1.32.0 | 18 Mar 24 03:49 PDT |                     |
|         | -p download-only-983000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
| delete  | -p download-only-983000                                                  | download-only-983000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
| start   | -o=json --download-only                                                  | download-only-875000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
|         | -p download-only-875000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.28.4                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
| delete  | -p download-only-875000                                                  | download-only-875000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
| start   | -o=json --download-only                                                  | download-only-053000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
|         | -p download-only-053000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.29.0-rc.2                                        |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
| delete  | -p download-only-053000                                                  | download-only-053000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
| delete  | -p download-only-983000                                                  | download-only-983000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
| delete  | -p download-only-875000                                                  | download-only-875000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
| delete  | -p download-only-053000                                                  | download-only-053000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
| start   | --download-only -p                                                       | binary-mirror-389000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
|         | binary-mirror-389000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:52076                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-389000                                                  | binary-mirror-389000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
| addons  | disable dashboard -p                                                     | addons-852000        | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
|         | addons-852000                                                            |                      |         |         |                     |                     |
| addons  | enable dashboard -p                                                      | addons-852000        | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
|         | addons-852000                                                            |                      |         |         |                     |                     |
| start   | -p addons-852000 --wait=true                                             | addons-852000        | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-852000                                                         | addons-852000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
| start   | -p nospam-289000 -n=1 --memory=2250 --wait=false                         | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-289000                                                         | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
| start   | -p functional-162000                                                     | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-162000                                                     | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-162000 cache add                                              | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-162000 cache add                                              | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-162000 cache add                                              | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-162000 cache add                                              | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | minikube-local-cache-test:functional-162000                              |                      |         |         |                     |                     |
| cache   | functional-162000 cache delete                                           | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | minikube-local-cache-test:functional-162000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
| ssh     | functional-162000 ssh sudo                                               | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-162000                                                        | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-162000 ssh                                                    | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-162000 cache reload                                           | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
| ssh     | functional-162000 ssh                                                    | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-162000 kubectl --                                             | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | --context functional-162000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-162000                                                     | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/03/18 03:51:53
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0318 03:51:53.392800   11437 out.go:291] Setting OutFile to fd 1 ...
I0318 03:51:53.392913   11437 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 03:51:53.392915   11437 out.go:304] Setting ErrFile to fd 2...
I0318 03:51:53.392917   11437 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 03:51:53.393031   11437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
I0318 03:51:53.393991   11437 out.go:298] Setting JSON to false
I0318 03:51:53.409805   11437 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6686,"bootTime":1710752427,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0318 03:51:53.409852   11437 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0318 03:51:53.415060   11437 out.go:177] * [functional-162000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
I0318 03:51:53.424078   11437 out.go:177]   - MINIKUBE_LOCATION=18431
I0318 03:51:53.428013   11437 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
I0318 03:51:53.424099   11437 notify.go:220] Checking for updates...
I0318 03:51:53.436017   11437 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0318 03:51:53.439106   11437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0318 03:51:53.442001   11437 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
I0318 03:51:53.445012   11437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0318 03:51:53.448349   11437 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 03:51:53.448400   11437 driver.go:392] Setting default libvirt URI to qemu:///system
I0318 03:51:53.453020   11437 out.go:177] * Using the qemu2 driver based on existing profile
I0318 03:51:53.460026   11437 start.go:297] selected driver: qemu2
I0318 03:51:53.460030   11437 start.go:901] validating driver "qemu2" against &{Name:functional-162000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-162000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0318 03:51:53.460100   11437 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0318 03:51:53.462478   11437 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0318 03:51:53.462522   11437 cni.go:84] Creating CNI manager for ""
I0318 03:51:53.462528   11437 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0318 03:51:53.462568   11437 start.go:340] cluster config:
{Name:functional-162000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-162000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0318 03:51:53.467174   11437 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0318 03:51:53.473990   11437 out.go:177] * Starting "functional-162000" primary control-plane node in "functional-162000" cluster
I0318 03:51:53.477900   11437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0318 03:51:53.477913   11437 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
I0318 03:51:53.477921   11437 cache.go:56] Caching tarball of preloaded images
I0318 03:51:53.477976   11437 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0318 03:51:53.477980   11437 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I0318 03:51:53.478051   11437 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/functional-162000/config.json ...
I0318 03:51:53.478522   11437 start.go:360] acquireMachinesLock for functional-162000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0318 03:51:53.478553   11437 start.go:364] duration metric: took 26.458µs to acquireMachinesLock for "functional-162000"
I0318 03:51:53.478561   11437 start.go:96] Skipping create...Using existing machine configuration
I0318 03:51:53.478566   11437 fix.go:54] fixHost starting: 
I0318 03:51:53.478688   11437 fix.go:112] recreateIfNeeded on functional-162000: state=Stopped err=<nil>
W0318 03:51:53.478695   11437 fix.go:138] unexpected machine state, will restart: <nil>
I0318 03:51:53.487034   11437 out.go:177] * Restarting existing qemu2 VM for "functional-162000" ...
I0318 03:51:53.490144   11437 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:44:a5:a5:e0:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/disk.qcow2
I0318 03:51:53.492191   11437 main.go:141] libmachine: STDOUT: 
I0318 03:51:53.492208   11437 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0318 03:51:53.492236   11437 fix.go:56] duration metric: took 13.671042ms for fixHost
I0318 03:51:53.492240   11437 start.go:83] releasing machines lock for "functional-162000", held for 13.684958ms
W0318 03:51:53.492245   11437 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0318 03:51:53.492274   11437 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0318 03:51:53.492279   11437 start.go:728] Will try again in 5 seconds ...
I0318 03:51:58.492481   11437 start.go:360] acquireMachinesLock for functional-162000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0318 03:51:58.492834   11437 start.go:364] duration metric: took 295.959µs to acquireMachinesLock for "functional-162000"
I0318 03:51:58.492997   11437 start.go:96] Skipping create...Using existing machine configuration
I0318 03:51:58.493016   11437 fix.go:54] fixHost starting: 
I0318 03:51:58.493756   11437 fix.go:112] recreateIfNeeded on functional-162000: state=Stopped err=<nil>
W0318 03:51:58.493777   11437 fix.go:138] unexpected machine state, will restart: <nil>
I0318 03:51:58.499267   11437 out.go:177] * Restarting existing qemu2 VM for "functional-162000" ...
I0318 03:51:58.503310   11437 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:44:a5:a5:e0:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/disk.qcow2
I0318 03:51:58.513703   11437 main.go:141] libmachine: STDOUT: 
I0318 03:51:58.513775   11437 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0318 03:51:58.513894   11437 fix.go:56] duration metric: took 20.884416ms for fixHost
I0318 03:51:58.513911   11437 start.go:83] releasing machines lock for "functional-162000", held for 21.02925ms
W0318 03:51:58.514126   11437 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-162000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0318 03:51:58.522958   11437 out.go:177] 
W0318 03:51:58.527204   11437 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0318 03:51:58.527226   11437 out.go:239] * 
W0318 03:51:58.529729   11437 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0318 03:51:58.537239   11437 out.go:177] 

* The control-plane node functional-162000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-162000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
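
Every start attempt in the log above dies with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', i.e. the host-side socket_vmnet daemon is not listening, so the qemu2 VM never comes up and minikube has no guest logs to show. A minimal pre-flight check one could run on the agent, assuming the default layout the logs reference (/opt/socket_vmnet, /var/run/socket_vmnet); the daemon invocation and gateway address below are illustrative assumptions, not taken from this report:

  # is anything serving the unix socket minikube expects?
  ls -l /var/run/socket_vmnet
  sudo lsof -U 2>/dev/null | grep socket_vmnet || echo "socket_vmnet not running"

  # if absent, start the daemon (flags are an assumed example)
  sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet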
TestFunctional/serial/LogsFileCmd (0.08s)
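
The check this test performs (functional_test.go:1224 in the transcript below) writes the cluster logs to a file and expects the word "Linux" to appear; because the VM is stopped, the file only ever receives host-side output. A hand reproduction of the assertion, assuming the same profile and an arbitrary output path ("/tmp/logs.txt" is illustrative):

  out/minikube-darwin-arm64 -p functional-162000 logs --file /tmp/logs.txt
  grep -q Linux /tmp/logs.txt || echo "expected word missing: Linux"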
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd1801460851/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-983000 | jenkins | v1.32.0 | 18 Mar 24 03:49 PDT |                     |
|         | -p download-only-983000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
| delete  | -p download-only-983000                                                  | download-only-983000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
| start   | -o=json --download-only                                                  | download-only-875000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
|         | -p download-only-875000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.28.4                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
| delete  | -p download-only-875000                                                  | download-only-875000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
| start   | -o=json --download-only                                                  | download-only-053000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
|         | -p download-only-053000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.29.0-rc.2                                        |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
| delete  | -p download-only-053000                                                  | download-only-053000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
| delete  | -p download-only-983000                                                  | download-only-983000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
| delete  | -p download-only-875000                                                  | download-only-875000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
| delete  | -p download-only-053000                                                  | download-only-053000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
| start   | --download-only -p                                                       | binary-mirror-389000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
|         | binary-mirror-389000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:52076                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-389000                                                  | binary-mirror-389000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
| addons  | disable dashboard -p                                                     | addons-852000        | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
|         | addons-852000                                                            |                      |         |         |                     |                     |
| addons  | enable dashboard -p                                                      | addons-852000        | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
|         | addons-852000                                                            |                      |         |         |                     |                     |
| start   | -p addons-852000 --wait=true                                             | addons-852000        | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-852000                                                         | addons-852000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
| start   | -p nospam-289000 -n=1 --memory=2250 --wait=false                         | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-289000 --log_dir                                                  | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-289000                                                         | nospam-289000        | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
| start   | -p functional-162000                                                     | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-162000                                                     | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-162000 cache add                                              | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-162000 cache add                                              | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-162000 cache add                                              | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-162000 cache add                                              | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | minikube-local-cache-test:functional-162000                              |                      |         |         |                     |                     |
| cache   | functional-162000 cache delete                                           | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | minikube-local-cache-test:functional-162000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
| ssh     | functional-162000 ssh sudo                                               | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-162000                                                        | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-162000 ssh                                                    | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-162000 cache reload                                           | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
| ssh     | functional-162000 ssh                                                    | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT | 18 Mar 24 03:51 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-162000 kubectl --                                             | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | --context functional-162000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-162000                                                     | functional-162000    | jenkins | v1.32.0 | 18 Mar 24 03:51 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/03/18 03:51:53
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0318 03:51:53.392800   11437 out.go:291] Setting OutFile to fd 1 ...
I0318 03:51:53.392913   11437 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 03:51:53.392915   11437 out.go:304] Setting ErrFile to fd 2...
I0318 03:51:53.392917   11437 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 03:51:53.393031   11437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
I0318 03:51:53.393991   11437 out.go:298] Setting JSON to false
I0318 03:51:53.409805   11437 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6686,"bootTime":1710752427,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0318 03:51:53.409852   11437 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0318 03:51:53.415060   11437 out.go:177] * [functional-162000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
I0318 03:51:53.424078   11437 out.go:177]   - MINIKUBE_LOCATION=18431
I0318 03:51:53.428013   11437 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
I0318 03:51:53.424099   11437 notify.go:220] Checking for updates...
I0318 03:51:53.436017   11437 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0318 03:51:53.439106   11437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0318 03:51:53.442001   11437 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
I0318 03:51:53.445012   11437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0318 03:51:53.448349   11437 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 03:51:53.448400   11437 driver.go:392] Setting default libvirt URI to qemu:///system
I0318 03:51:53.453020   11437 out.go:177] * Using the qemu2 driver based on existing profile
I0318 03:51:53.460026   11437 start.go:297] selected driver: qemu2
I0318 03:51:53.460030   11437 start.go:901] validating driver "qemu2" against &{Name:functional-162000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-162000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0318 03:51:53.460100   11437 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0318 03:51:53.462478   11437 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0318 03:51:53.462522   11437 cni.go:84] Creating CNI manager for ""
I0318 03:51:53.462528   11437 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0318 03:51:53.462568   11437 start.go:340] cluster config:
{Name:functional-162000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-162000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0318 03:51:53.467174   11437 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0318 03:51:53.473990   11437 out.go:177] * Starting "functional-162000" primary control-plane node in "functional-162000" cluster
I0318 03:51:53.477900   11437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0318 03:51:53.477913   11437 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
I0318 03:51:53.477921   11437 cache.go:56] Caching tarball of preloaded images
I0318 03:51:53.477976   11437 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0318 03:51:53.477980   11437 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I0318 03:51:53.478051   11437 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/functional-162000/config.json ...
I0318 03:51:53.478522   11437 start.go:360] acquireMachinesLock for functional-162000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0318 03:51:53.478553   11437 start.go:364] duration metric: took 26.458µs to acquireMachinesLock for "functional-162000"
I0318 03:51:53.478561   11437 start.go:96] Skipping create...Using existing machine configuration
I0318 03:51:53.478566   11437 fix.go:54] fixHost starting: 
I0318 03:51:53.478688   11437 fix.go:112] recreateIfNeeded on functional-162000: state=Stopped err=<nil>
W0318 03:51:53.478695   11437 fix.go:138] unexpected machine state, will restart: <nil>
I0318 03:51:53.487034   11437 out.go:177] * Restarting existing qemu2 VM for "functional-162000" ...
I0318 03:51:53.490144   11437 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:44:a5:a5:e0:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/disk.qcow2
I0318 03:51:53.492191   11437 main.go:141] libmachine: STDOUT: 
I0318 03:51:53.492208   11437 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0318 03:51:53.492236   11437 fix.go:56] duration metric: took 13.671042ms for fixHost
I0318 03:51:53.492240   11437 start.go:83] releasing machines lock for "functional-162000", held for 13.684958ms
W0318 03:51:53.492245   11437 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0318 03:51:53.492274   11437 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0318 03:51:53.492279   11437 start.go:728] Will try again in 5 seconds ...
I0318 03:51:58.492481   11437 start.go:360] acquireMachinesLock for functional-162000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0318 03:51:58.492834   11437 start.go:364] duration metric: took 295.959µs to acquireMachinesLock for "functional-162000"
I0318 03:51:58.492997   11437 start.go:96] Skipping create...Using existing machine configuration
I0318 03:51:58.493016   11437 fix.go:54] fixHost starting: 
I0318 03:51:58.493756   11437 fix.go:112] recreateIfNeeded on functional-162000: state=Stopped err=<nil>
W0318 03:51:58.493777   11437 fix.go:138] unexpected machine state, will restart: <nil>
I0318 03:51:58.499267   11437 out.go:177] * Restarting existing qemu2 VM for "functional-162000" ...
I0318 03:51:58.503310   11437 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:44:a5:a5:e0:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/functional-162000/disk.qcow2
I0318 03:51:58.513703   11437 main.go:141] libmachine: STDOUT: 
I0318 03:51:58.513775   11437 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0318 03:51:58.513894   11437 fix.go:56] duration metric: took 20.884416ms for fixHost
I0318 03:51:58.513911   11437 start.go:83] releasing machines lock for "functional-162000", held for 21.02925ms
W0318 03:51:58.514126   11437 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-162000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0318 03:51:58.522958   11437 out.go:177] 
W0318 03:51:58.527204   11437 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0318 03:51:58.527226   11437 out.go:239] * 
W0318 03:51:58.529729   11437 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0318 03:51:58.537239   11437 out.go:177] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.08s)
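Note: every failed start in this log shares one root cause, visible in the libmachine STDERR lines above: the qemu2 driver cannot connect to the socket_vmnet unix socket at /var/run/socket_vmnet ("Connection refused"). A minimal Go sketch that reproduces just that dial check on the build agent (the socket path is taken from the log; the program is illustrative, not part of minikube):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path taken verbatim from the failing libmachine STDERR above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// On this agent the dial fails the same way the driver does:
		// "connection refused" means nothing is listening on the socket.
		fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is listening on %s\n", sock)
}

If this dial succeeds, socket_vmnet is running and a rerun should get past the point where every test below stops.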

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-162000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-162000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.288875ms)

** stderr ** 
	error: context "functional-162000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-162000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-162000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-162000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-162000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-162000 --alsologtostderr -v=1] stderr:
I0318 03:52:35.551550   11645 out.go:291] Setting OutFile to fd 1 ...
I0318 03:52:35.552117   11645 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 03:52:35.552128   11645 out.go:304] Setting ErrFile to fd 2...
I0318 03:52:35.552131   11645 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 03:52:35.552264   11645 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
I0318 03:52:35.552495   11645 mustload.go:65] Loading cluster: functional-162000
I0318 03:52:35.552688   11645 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 03:52:35.556782   11645 out.go:177] * The control-plane node functional-162000 host is not running: state=Stopped
I0318 03:52:35.560916   11645 out.go:177]   To start a cluster, run: "minikube start -p functional-162000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000: exit status 7 (43.269125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-162000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.17s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 status: exit status 7 (74.933917ms)

-- stdout --
	functional-162000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-162000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (34.837375ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-162000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 status -o json: exit status 7 (31.917666ms)

-- stdout --
	{"Name":"functional-162000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-162000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000: exit status 7 (31.747833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-162000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.17s)
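Note: the three invocations above exercise the plain, templated (-f), and JSON (-o json) output paths of the same status command; all exit 7 because the host is stopped. The JSON payload captured above decodes into a flat struct. A sketch, with field names mirroring only the keys present in that stdout:

package main

import (
	"encoding/json"
	"fmt"
)

// Status mirrors the keys in the "minikube status -o json" stdout above.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// The exact stdout captured by the test.
	raw := `{"Name":"functional-162000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`
	var s Status
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	// Prints: functional-162000: host=Stopped apiserver=Stopped
	fmt.Printf("%s: host=%s apiserver=%s\n", s.Name, s.Host, s.APIServer)
}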

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-162000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-162000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.222834ms)

** stderr ** 
	error: context "functional-162000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-162000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-162000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-162000 describe po hello-node-connect: exit status 1 (26.844542ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-162000

** /stderr **
functional_test.go:1600: "kubectl --context functional-162000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-162000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-162000 logs -l app=hello-node-connect: exit status 1 (26.70275ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-162000

** /stderr **
functional_test.go:1606: "kubectl --context functional-162000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-162000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-162000 describe svc hello-node-connect: exit status 1 (27.528791ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-162000

** /stderr **
functional_test.go:1612: "kubectl --context functional-162000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000: exit status 7 (32.131167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-162000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.04s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-162000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000: exit status 7 (35.739542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-162000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.04s)

TestFunctional/parallel/SSHCmd (0.14s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "echo hello": exit status 83 (52.721709ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-162000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-162000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-162000\"\n"*. args "out/minikube-darwin-arm64 -p functional-162000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "cat /etc/hostname": exit status 83 (46.335792ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-162000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-162000"- but got *"* The control-plane node functional-162000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-162000\"\n"*. args "out/minikube-darwin-arm64 -p functional-162000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000: exit status 7 (37.553917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-162000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.14s)

TestFunctional/parallel/CpCmd (0.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (56.46725ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-162000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh -n functional-162000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh -n functional-162000 "sudo cat /home/docker/cp-test.txt": exit status 83 (44.034208ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-162000 ssh -n functional-162000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-162000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-162000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 cp functional-162000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd884052271/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 cp functional-162000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd884052271/001/cp-test.txt: exit status 83 (46.925166ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-162000 cp functional-162000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd884052271/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh -n functional-162000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh -n functional-162000 "sudo cat /home/docker/cp-test.txt": exit status 83 (43.361583ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-162000 ssh -n functional-162000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd884052271/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-162000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-162000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (50.528667ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-162000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh -n functional-162000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh -n functional-162000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (58.099875ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-162000 ssh -n functional-162000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-162000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-162000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.30s)
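Note: the "(-want +got)" blocks above (and in FileSync/CertSync below) are edit scripts in the style of the go-cmp diff library: "-" lines carry the expected file content, "+" lines the actual minikube output that was copied back instead. A small sketch that reproduces the shape of that output, assuming github.com/google/go-cmp is available (strings shortened from the captured diff):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := "Test file for checking file cp process"
	got := "* The control-plane node functional-162000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-162000\"\n"
	// cmp.Diff renders string mismatches as a strings.Join edit script,
	// which is the layout of the helpers_test.go output above.
	fmt.Printf("content mismatch (-want +got):\n%s", cmp.Diff(want, got))
}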

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11035/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "sudo cat /etc/test/nested/copy/11035/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "sudo cat /etc/test/nested/copy/11035/hosts": exit status 83 (41.767542ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-162000 ssh "sudo cat /etc/test/nested/copy/11035/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-162000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-162000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-162000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-162000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000: exit status 7 (31.923333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-162000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11035.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "sudo cat /etc/ssl/certs/11035.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "sudo cat /etc/ssl/certs/11035.pem": exit status 83 (46.606666ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/11035.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-162000 ssh \"sudo cat /etc/ssl/certs/11035.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/11035.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-162000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-162000"
	"""
)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11035.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "sudo cat /usr/share/ca-certificates/11035.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "sudo cat /usr/share/ca-certificates/11035.pem": exit status 83 (43.635458ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/11035.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-162000 ssh \"sudo cat /usr/share/ca-certificates/11035.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/11035.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-162000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-162000"
	"""
)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (42.589417ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-162000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-162000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-162000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/110352.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "sudo cat /etc/ssl/certs/110352.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "sudo cat /etc/ssl/certs/110352.pem": exit status 83 (42.702083ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/110352.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-162000 ssh \"sudo cat /etc/ssl/certs/110352.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/110352.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-162000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-162000"
	"""
)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/110352.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "sudo cat /usr/share/ca-certificates/110352.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "sudo cat /usr/share/ca-certificates/110352.pem": exit status 83 (42.734375ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/110352.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-162000 ssh \"sudo cat /usr/share/ca-certificates/110352.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/110352.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-162000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-162000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (42.157416ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-162000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-162000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-162000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000: exit status 7 (31.737917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-162000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.29s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-162000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-162000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (28.15175ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-162000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-162000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-162000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-162000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-162000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-162000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-162000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-162000 -n functional-162000: exit status 7 (32.361875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-162000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "sudo systemctl is-active crio": exit status 83 (53.326625ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-162000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-162000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-162000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-162000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0318 03:51:59.218612   11489 out.go:291] Setting OutFile to fd 1 ...
I0318 03:51:59.218742   11489 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 03:51:59.218745   11489 out.go:304] Setting ErrFile to fd 2...
I0318 03:51:59.218747   11489 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 03:51:59.218875   11489 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
I0318 03:51:59.219114   11489 mustload.go:65] Loading cluster: functional-162000
I0318 03:51:59.219333   11489 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 03:51:59.223475   11489 out.go:177] * The control-plane node functional-162000 host is not running: state=Stopped
I0318 03:51:59.235444   11489 out.go:177]   To start a cluster, run: "minikube start -p functional-162000"

stdout: * The control-plane node functional-162000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-162000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-162000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-162000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-162000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-162000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 11488: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-162000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-162000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-162000": client config: context "functional-162000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (98.81s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-162000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-162000 get svc nginx-svc: exit status 1 (69.6655ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-162000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-162000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (98.81s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-162000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-162000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.553708ms)

** stderr ** 
	error: context "functional-162000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-162000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 service list: exit status 83 (45.8ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-162000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-162000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-162000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 service list -o json: exit status 83 (46.834833ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-162000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 service --namespace=default --https --url hello-node: exit status 83 (45.680333ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-162000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 service hello-node --url --format={{.IP}}: exit status 83 (43.878042ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-162000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-162000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-162000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 service hello-node --url: exit status 83 (44.647958ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-162000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-162000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-162000"
functional_test.go:1565: failed to parse "* The control-plane node functional-162000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-162000\"": parse "* The control-plane node functional-162000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-162000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 version -o=json --components: exit status 83 (43.919959ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-162000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-162000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-162000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-162000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-162000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-162000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-162000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-162000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-162000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-162000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-162000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-162000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-162000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-162000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-162000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-162000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-162000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-162000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-162000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-162000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-162000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-162000 image ls --format short --alsologtostderr:
I0318 03:52:52.627981   11781 out.go:291] Setting OutFile to fd 1 ...
I0318 03:52:52.628146   11781 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 03:52:52.628149   11781 out.go:304] Setting ErrFile to fd 2...
I0318 03:52:52.628152   11781 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 03:52:52.628281   11781 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
I0318 03:52:52.628685   11781 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 03:52:52.628741   11781 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-162000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-162000 image ls --format table --alsologtostderr:
I0318 03:52:52.703942   11785 out.go:291] Setting OutFile to fd 1 ...
I0318 03:52:52.704097   11785 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 03:52:52.704101   11785 out.go:304] Setting ErrFile to fd 2...
I0318 03:52:52.704104   11785 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 03:52:52.704228   11785 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
I0318 03:52:52.704624   11785 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 03:52:52.704685   11785 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-162000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-162000 image ls --format json --alsologtostderr:
I0318 03:52:52.666076   11783 out.go:291] Setting OutFile to fd 1 ...
I0318 03:52:52.666239   11783 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 03:52:52.666242   11783 out.go:304] Setting ErrFile to fd 2...
I0318 03:52:52.666245   11783 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 03:52:52.666383   11783 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
I0318 03:52:52.666779   11783 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 03:52:52.666837   11783 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-162000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-162000 image ls --format yaml --alsologtostderr:
I0318 03:52:52.590425   11779 out.go:291] Setting OutFile to fd 1 ...
I0318 03:52:52.590584   11779 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 03:52:52.590587   11779 out.go:304] Setting ErrFile to fd 2...
I0318 03:52:52.590589   11779 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 03:52:52.590709   11779 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
I0318 03:52:52.591135   11779 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 03:52:52.591195   11779 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh pgrep buildkitd: exit status 83 (42.923917ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 image build -t localhost/my-image:functional-162000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-162000 image build -t localhost/my-image:functional-162000 testdata/build --alsologtostderr:
I0318 03:52:52.784936   11789 out.go:291] Setting OutFile to fd 1 ...
I0318 03:52:52.785791   11789 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 03:52:52.785795   11789 out.go:304] Setting ErrFile to fd 2...
I0318 03:52:52.785797   11789 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 03:52:52.785953   11789 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
I0318 03:52:52.786342   11789 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 03:52:52.786795   11789 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 03:52:52.787027   11789 build_images.go:133] succeeded building to: 
I0318 03:52:52.787031   11789 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 image ls
functional_test.go:442: expected "localhost/my-image:functional-162000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 image load --daemon gcr.io/google-containers/addon-resizer:functional-162000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-162000 image load --daemon gcr.io/google-containers/addon-resizer:functional-162000 --alsologtostderr: (1.281656333s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-162000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 image load --daemon gcr.io/google-containers/addon-resizer:functional-162000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-162000 image load --daemon gcr.io/google-containers/addon-resizer:functional-162000 --alsologtostderr: (1.322830917s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-162000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.265302416s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-162000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 image load --daemon gcr.io/google-containers/addon-resizer:functional-162000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-162000 image load --daemon gcr.io/google-containers/addon-resizer:functional-162000 --alsologtostderr: (1.173519416s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-162000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.51s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 image save gcr.io/google-containers/addon-resizer:functional-162000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-162000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-162000 docker-env) && out/minikube-darwin-arm64 status -p functional-162000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-162000 docker-env) && out/minikube-darwin-arm64 status -p functional-162000": exit status 1 (46.786208ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 update-context --alsologtostderr -v=2: exit status 83 (43.745375ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
** stderr ** 
	I0318 03:52:52.859899   11793 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:52:52.860455   11793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:52:52.860459   11793 out.go:304] Setting ErrFile to fd 2...
	I0318 03:52:52.860461   11793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:52:52.860606   11793 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:52:52.860838   11793 mustload.go:65] Loading cluster: functional-162000
	I0318 03:52:52.861023   11793 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:52:52.865559   11793 out.go:177] * The control-plane node functional-162000 host is not running: state=Stopped
	I0318 03:52:52.869574   11793 out.go:177]   To start a cluster, run: "minikube start -p functional-162000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-162000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-162000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-162000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 update-context --alsologtostderr -v=2: exit status 83 (43.50575ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
** stderr ** 
	I0318 03:52:52.950398   11797 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:52:52.950552   11797 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:52:52.950556   11797 out.go:304] Setting ErrFile to fd 2...
	I0318 03:52:52.950558   11797 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:52:52.950703   11797 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:52:52.950952   11797 mustload.go:65] Loading cluster: functional-162000
	I0318 03:52:52.951149   11797 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:52:52.954606   11797 out.go:177] * The control-plane node functional-162000 host is not running: state=Stopped
	I0318 03:52:52.958601   11797 out.go:177]   To start a cluster, run: "minikube start -p functional-162000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-162000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-162000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-162000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 update-context --alsologtostderr -v=2: exit status 83 (45.705125ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
** stderr ** 
	I0318 03:52:52.904850   11795 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:52:52.904998   11795 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:52:52.905001   11795 out.go:304] Setting ErrFile to fd 2...
	I0318 03:52:52.905003   11795 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:52:52.905116   11795 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:52:52.905351   11795 mustload.go:65] Loading cluster: functional-162000
	I0318 03:52:52.905541   11795 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:52:52.910648   11795 out.go:177] * The control-plane node functional-162000 host is not running: state=Stopped
	I0318 03:52:52.914528   11795 out.go:177]   To start a cluster, run: "minikube start -p functional-162000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-162000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-162000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-162000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.032674375s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (38.68s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (38.68s)

TestMultiControlPlane/serial/StartCluster (9.96s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-948000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-948000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.891206416s)

-- stdout --
	* [ha-948000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-948000" primary control-plane node in "ha-948000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-948000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0318 03:54:42.411779   11832 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:54:42.411919   11832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:54:42.411923   11832 out.go:304] Setting ErrFile to fd 2...
	I0318 03:54:42.411928   11832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:54:42.412046   11832 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:54:42.413123   11832 out.go:298] Setting JSON to false
	I0318 03:54:42.429110   11832 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6855,"bootTime":1710752427,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 03:54:42.429193   11832 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 03:54:42.434897   11832 out.go:177] * [ha-948000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 03:54:42.441864   11832 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 03:54:42.441912   11832 notify.go:220] Checking for updates...
	I0318 03:54:42.447827   11832 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 03:54:42.455685   11832 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 03:54:42.462838   11832 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 03:54:42.465800   11832 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 03:54:42.471871   11832 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 03:54:42.476068   11832 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 03:54:42.478903   11832 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 03:54:42.486834   11832 start.go:297] selected driver: qemu2
	I0318 03:54:42.486840   11832 start.go:901] validating driver "qemu2" against <nil>
	I0318 03:54:42.486846   11832 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 03:54:42.489319   11832 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 03:54:42.492836   11832 out.go:177] * Automatically selected the socket_vmnet network
	I0318 03:54:42.495925   11832 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 03:54:42.495965   11832 cni.go:84] Creating CNI manager for ""
	I0318 03:54:42.495970   11832 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 03:54:42.495975   11832 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 03:54:42.496024   11832 start.go:340] cluster config:
	{Name:ha-948000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-948000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client Soc
ketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 03:54:42.500955   11832 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 03:54:42.507798   11832 out.go:177] * Starting "ha-948000" primary control-plane node in "ha-948000" cluster
	I0318 03:54:42.511823   11832 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 03:54:42.511847   11832 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 03:54:42.511855   11832 cache.go:56] Caching tarball of preloaded images
	I0318 03:54:42.511909   11832 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 03:54:42.511915   11832 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 03:54:42.512133   11832 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/ha-948000/config.json ...
	I0318 03:54:42.512157   11832 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/ha-948000/config.json: {Name:mk0690c9b8d248073403d0c825ddb0955390c8a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 03:54:42.512407   11832 start.go:360] acquireMachinesLock for ha-948000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 03:54:42.512447   11832 start.go:364] duration metric: took 33.459µs to acquireMachinesLock for "ha-948000"
	I0318 03:54:42.512460   11832 start.go:93] Provisioning new machine with config: &{Name:ha-948000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.28.4 ClusterName:ha-948000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 03:54:42.512488   11832 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 03:54:42.520850   11832 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 03:54:42.538744   11832 start.go:159] libmachine.API.Create for "ha-948000" (driver="qemu2")
	I0318 03:54:42.538770   11832 client.go:168] LocalClient.Create starting
	I0318 03:54:42.538839   11832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 03:54:42.538868   11832 main.go:141] libmachine: Decoding PEM data...
	I0318 03:54:42.538878   11832 main.go:141] libmachine: Parsing certificate...
	I0318 03:54:42.538928   11832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 03:54:42.538950   11832 main.go:141] libmachine: Decoding PEM data...
	I0318 03:54:42.538958   11832 main.go:141] libmachine: Parsing certificate...
	I0318 03:54:42.539305   11832 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 03:54:42.672711   11832 main.go:141] libmachine: Creating SSH key...
	I0318 03:54:42.720030   11832 main.go:141] libmachine: Creating Disk image...
	I0318 03:54:42.720035   11832 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 03:54:42.720204   11832 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/disk.qcow2
	I0318 03:54:42.732277   11832 main.go:141] libmachine: STDOUT: 
	I0318 03:54:42.732298   11832 main.go:141] libmachine: STDERR: 
	I0318 03:54:42.732356   11832 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/disk.qcow2 +20000M
	I0318 03:54:42.743329   11832 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 03:54:42.743349   11832 main.go:141] libmachine: STDERR: 
	I0318 03:54:42.743367   11832 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/disk.qcow2
	I0318 03:54:42.743373   11832 main.go:141] libmachine: Starting QEMU VM...
	I0318 03:54:42.743407   11832 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:ac:c3:c3:18:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/disk.qcow2
	I0318 03:54:42.745131   11832 main.go:141] libmachine: STDOUT: 
	I0318 03:54:42.745147   11832 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 03:54:42.745166   11832 client.go:171] duration metric: took 206.397042ms to LocalClient.Create
	I0318 03:54:44.745866   11832 start.go:128] duration metric: took 2.233415541s to createHost
	I0318 03:54:44.745924   11832 start.go:83] releasing machines lock for "ha-948000", held for 2.233532417s
	W0318 03:54:44.746058   11832 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 03:54:44.757272   11832 out.go:177] * Deleting "ha-948000" in qemu2 ...
	W0318 03:54:44.784957   11832 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 03:54:44.784987   11832 start.go:728] Will try again in 5 seconds ...
	I0318 03:54:49.786151   11832 start.go:360] acquireMachinesLock for ha-948000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 03:54:49.786624   11832 start.go:364] duration metric: took 356.583µs to acquireMachinesLock for "ha-948000"
	I0318 03:54:49.786735   11832 start.go:93] Provisioning new machine with config: &{Name:ha-948000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.28.4 ClusterName:ha-948000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 03:54:49.787028   11832 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 03:54:49.794793   11832 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 03:54:49.846185   11832 start.go:159] libmachine.API.Create for "ha-948000" (driver="qemu2")
	I0318 03:54:49.846244   11832 client.go:168] LocalClient.Create starting
	I0318 03:54:49.846349   11832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 03:54:49.846408   11832 main.go:141] libmachine: Decoding PEM data...
	I0318 03:54:49.846421   11832 main.go:141] libmachine: Parsing certificate...
	I0318 03:54:49.846495   11832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 03:54:49.846543   11832 main.go:141] libmachine: Decoding PEM data...
	I0318 03:54:49.846553   11832 main.go:141] libmachine: Parsing certificate...
	I0318 03:54:49.847059   11832 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 03:54:49.997777   11832 main.go:141] libmachine: Creating SSH key...
	I0318 03:54:50.196642   11832 main.go:141] libmachine: Creating Disk image...
	I0318 03:54:50.196650   11832 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 03:54:50.196854   11832 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/disk.qcow2
	I0318 03:54:50.209778   11832 main.go:141] libmachine: STDOUT: 
	I0318 03:54:50.209802   11832 main.go:141] libmachine: STDERR: 
	I0318 03:54:50.209870   11832 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/disk.qcow2 +20000M
	I0318 03:54:50.220758   11832 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 03:54:50.220772   11832 main.go:141] libmachine: STDERR: 
	I0318 03:54:50.220791   11832 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/disk.qcow2
	I0318 03:54:50.220797   11832 main.go:141] libmachine: Starting QEMU VM...
	I0318 03:54:50.220829   11832 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:d4:dd:65:17:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/disk.qcow2
	I0318 03:54:50.222522   11832 main.go:141] libmachine: STDOUT: 
	I0318 03:54:50.222537   11832 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 03:54:50.222553   11832 client.go:171] duration metric: took 376.3155ms to LocalClient.Create
	I0318 03:54:52.224774   11832 start.go:128] duration metric: took 2.437772625s to createHost
	I0318 03:54:52.224870   11832 start.go:83] releasing machines lock for "ha-948000", held for 2.438292042s
	W0318 03:54:52.225208   11832 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-948000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-948000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 03:54:52.239826   11832 out.go:177] 
	W0318 03:54:52.244920   11832 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 03:54:52.244949   11832 out.go:239] * 
	* 
	W0318 03:54:52.247358   11832 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 03:54:52.258636   11832 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-948000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000: exit status 7 (69.554666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.96s)
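Note: both VM-creation attempts above die on the same error: qemu is launched through socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the daemon was evidently not listening on this agent. The remaining TestMultiControlPlane failures below all cascade from this. A small diagnostic sketch that reproduces the check (socket path from the log; this is not part of the test suite):

	// socketcheck.go: confirm whether the socket_vmnet daemon is accepting
	// connections on the unix socket minikube is configured to use.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this agent: "connect: connection refused", matching the log.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}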

TestMultiControlPlane/serial/DeployApp (114.45s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-948000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-948000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (60.101292ms)

** stderr ** 
	error: cluster "ha-948000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-948000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-948000 -- rollout status deployment/busybox: exit status 1 (58.45475ms)

** stderr ** 
	error: no server found for cluster "ha-948000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.52375ms)

** stderr ** 
	error: no server found for cluster "ha-948000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.397041ms)

** stderr ** 
	error: no server found for cluster "ha-948000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.723833ms)

** stderr ** 
	error: no server found for cluster "ha-948000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.424917ms)

** stderr ** 
	error: no server found for cluster "ha-948000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.155541ms)

** stderr ** 
	error: no server found for cluster "ha-948000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.200875ms)

** stderr ** 
	error: no server found for cluster "ha-948000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.057209ms)

** stderr ** 
	error: no server found for cluster "ha-948000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.858625ms)

** stderr ** 
	error: no server found for cluster "ha-948000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.267167ms)

** stderr ** 
	error: no server found for cluster "ha-948000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.148625ms)

** stderr ** 
	error: no server found for cluster "ha-948000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.788084ms)

** stderr ** 
	error: no server found for cluster "ha-948000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.142458ms)

** stderr ** 
	error: no server found for cluster "ha-948000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-948000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-948000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.833333ms)

** stderr ** 
	error: no server found for cluster "ha-948000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-948000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-948000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.261834ms)

** stderr ** 
	error: no server found for cluster "ha-948000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-948000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-948000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.802792ms)

** stderr ** 
	error: no server found for cluster "ha-948000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000: exit status 7 (32.002416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (114.45s)
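Note: every kubectl call in this test fails the same way because start never completed, so the kubeconfig has no cluster or context named ha-948000. The (dbg) Run lines above wrap plain command execution; a minimal Go sketch of that pattern, assuming kubectl is on PATH (the file name is hypothetical):

	// ctxcheck.go: run kubectl the way the (dbg) Run helpers do and report
	// the exit status; against this kubeconfig it exits 1, as in the log.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "ha-948000", "get", "pods").CombinedOutput()
		fmt.Print(string(out))
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("exit status:", ee.ExitCode())
		}
	}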

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-948000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.928916ms)

** stderr ** 
	error: no server found for cluster "ha-948000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000: exit status 7 (32.1305ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-948000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-948000 -v=7 --alsologtostderr: exit status 83 (45.172042ms)

-- stdout --
	* The control-plane node ha-948000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-948000"

-- /stdout --
** stderr ** 
	I0318 03:56:46.927862   11922 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:56:46.928243   11922 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:46.928247   11922 out.go:304] Setting ErrFile to fd 2...
	I0318 03:56:46.928249   11922 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:46.928433   11922 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:56:46.928666   11922 mustload.go:65] Loading cluster: ha-948000
	I0318 03:56:46.928877   11922 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:56:46.932807   11922 out.go:177] * The control-plane node ha-948000 host is not running: state=Stopped
	I0318 03:56:46.936754   11922 out.go:177]   To start a cluster, run: "minikube start -p ha-948000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-948000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000: exit status 7 (32.410625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-948000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-948000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.34775ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-948000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-948000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-948000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000: exit status 7 (32.709708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-948000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-948000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-948000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-948000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-948000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-948000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-948000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-948000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000: exit status 7 (32.594833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.11s)
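Note: the two assertions above parse the output of `profile list --output json` and expect four nodes and a "HAppy" status, but the stored profile has a single control-plane node and status Stopped. A sketch of the node-count side of the check, declaring only the JSON fields it needs (the struct is a local stand-in, not minikube's own type):

	// profilecount.go: count Nodes per profile; run as:
	//   out/minikube-darwin-arm64 profile list --output json | go run profilecount.go
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type profileList struct {
		Valid []struct {
			Name   string
			Status string
			Config struct {
				Nodes []struct{ ControlPlane, Worker bool }
			}
		} `json:"valid"`
	}

	func main() {
		var pl profileList
		if err := json.NewDecoder(os.Stdin).Decode(&pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
		}
	}

Against the JSON captured above this would print nodes=1, which is exactly the mismatch the test reports.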

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-948000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-948000 status --output json -v=7 --alsologtostderr: exit status 7 (32.186708ms)

-- stdout --
	{"Name":"ha-948000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0318 03:56:47.172721   11935 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:56:47.172874   11935 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:47.172877   11935 out.go:304] Setting ErrFile to fd 2...
	I0318 03:56:47.172879   11935 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:47.172999   11935 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:56:47.173135   11935 out.go:298] Setting JSON to true
	I0318 03:56:47.173147   11935 mustload.go:65] Loading cluster: ha-948000
	I0318 03:56:47.173205   11935 notify.go:220] Checking for updates...
	I0318 03:56:47.173328   11935 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:56:47.173336   11935 status.go:255] checking status of ha-948000 ...
	I0318 03:56:47.173551   11935 status.go:330] ha-948000 host status = "Stopped" (err=<nil>)
	I0318 03:56:47.173555   11935 status.go:343] host is not running, skipping remaining checks
	I0318 03:56:47.173557   11935 status.go:257] ha-948000 status: &{Name:ha-948000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-948000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
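Note: the unmarshal error is a shape mismatch: for a single-node profile the status command emits one JSON object (see the stdout above), while the test decodes into a []cmd.Status slice. A tolerant decoding sketch; the Status struct here is a local stand-in built from the fields visible in the log, not minikube's type:

	// statusdecode.go: accept either a single status object or an array of
	// them, the mismatch behind "cannot unmarshal object into ... []cmd.Status".
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func decodeStatuses(data []byte) ([]Status, error) {
		var many []Status
		if err := json.Unmarshal(data, &many); err == nil {
			return many, nil
		}
		var one Status
		if err := json.Unmarshal(data, &one); err != nil {
			return nil, err
		}
		return []Status{one}, nil
	}

	func main() {
		raw := []byte(`{"Name":"ha-948000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
		sts, err := decodeStatuses(raw)
		fmt.Println(sts, err)
	}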
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000: exit status 7 (32.010458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)

TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-948000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-948000 node stop m02 -v=7 --alsologtostderr: exit status 85 (50.925542ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0318 03:56:47.237484   11939 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:56:47.237914   11939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:47.237918   11939 out.go:304] Setting ErrFile to fd 2...
	I0318 03:56:47.237920   11939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:47.238075   11939 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:56:47.238314   11939 mustload.go:65] Loading cluster: ha-948000
	I0318 03:56:47.238502   11939 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:56:47.243338   11939 out.go:177] 
	W0318 03:56:47.246295   11939 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0318 03:56:47.246300   11939 out.go:239] * 
	* 
	W0318 03:56:47.248457   11939 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 03:56:47.253229   11939 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-948000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr: exit status 7 (32.243042ms)

                                                
                                                
-- stdout --
	ha-948000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 03:56:47.288619   11941 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:56:47.288792   11941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:47.288795   11941 out.go:304] Setting ErrFile to fd 2...
	I0318 03:56:47.288798   11941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:47.288921   11941 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:56:47.289052   11941 out.go:298] Setting JSON to false
	I0318 03:56:47.289064   11941 mustload.go:65] Loading cluster: ha-948000
	I0318 03:56:47.289121   11941 notify.go:220] Checking for updates...
	I0318 03:56:47.289257   11941 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:56:47.289263   11941 status.go:255] checking status of ha-948000 ...
	I0318 03:56:47.289497   11941 status.go:330] ha-948000 host status = "Stopped" (err=<nil>)
	I0318 03:56:47.289501   11941 status.go:343] host is not running, skipping remaining checks
	I0318 03:56:47.289503   11941 status.go:257] ha-948000 status: &{Name:ha-948000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr": ha-948000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr": ha-948000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr": ha-948000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr": ha-948000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000: exit status 7 (32.2915ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-948000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-948000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-948000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-948000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000: exit status 7 (31.984625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.11s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (37.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-948000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-948000 node start m02 -v=7 --alsologtostderr: exit status 85 (53.3545ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 03:56:47.459484   11951 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:56:47.459869   11951 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:47.459873   11951 out.go:304] Setting ErrFile to fd 2...
	I0318 03:56:47.459876   11951 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:47.460051   11951 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:56:47.460283   11951 mustload.go:65] Loading cluster: ha-948000
	I0318 03:56:47.460447   11951 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:56:47.464716   11951 out.go:177] 
	W0318 03:56:47.467872   11951 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0318 03:56:47.467876   11951 out.go:239] * 
	* 
	W0318 03:56:47.469819   11951 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 03:56:47.473680   11951 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:422: I0318 03:56:47.459484   11951 out.go:291] Setting OutFile to fd 1 ...
I0318 03:56:47.459869   11951 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 03:56:47.459873   11951 out.go:304] Setting ErrFile to fd 2...
I0318 03:56:47.459876   11951 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 03:56:47.460051   11951 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
I0318 03:56:47.460283   11951 mustload.go:65] Loading cluster: ha-948000
I0318 03:56:47.460447   11951 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 03:56:47.464716   11951 out.go:177] 
W0318 03:56:47.467872   11951 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0318 03:56:47.467876   11951 out.go:239] * 
* 
W0318 03:56:47.469819   11951 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0318 03:56:47.473680   11951 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-948000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr: exit status 7 (31.967416ms)

                                                
                                                
-- stdout --
	ha-948000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 03:56:47.512956   11953 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:56:47.513091   11953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:47.513094   11953 out.go:304] Setting ErrFile to fd 2...
	I0318 03:56:47.513097   11953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:47.513225   11953 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:56:47.513343   11953 out.go:298] Setting JSON to false
	I0318 03:56:47.513356   11953 mustload.go:65] Loading cluster: ha-948000
	I0318 03:56:47.513407   11953 notify.go:220] Checking for updates...
	I0318 03:56:47.513550   11953 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:56:47.513557   11953 status.go:255] checking status of ha-948000 ...
	I0318 03:56:47.513758   11953 status.go:330] ha-948000 host status = "Stopped" (err=<nil>)
	I0318 03:56:47.513764   11953 status.go:343] host is not running, skipping remaining checks
	I0318 03:56:47.513766   11953 status.go:257] ha-948000 status: &{Name:ha-948000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr: exit status 7 (76.696875ms)

                                                
                                                
-- stdout --
	ha-948000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 03:56:48.306949   11955 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:56:48.307159   11955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:48.307164   11955 out.go:304] Setting ErrFile to fd 2...
	I0318 03:56:48.307167   11955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:48.307327   11955 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:56:48.307491   11955 out.go:298] Setting JSON to false
	I0318 03:56:48.307509   11955 mustload.go:65] Loading cluster: ha-948000
	I0318 03:56:48.307549   11955 notify.go:220] Checking for updates...
	I0318 03:56:48.307751   11955 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:56:48.307759   11955 status.go:255] checking status of ha-948000 ...
	I0318 03:56:48.307998   11955 status.go:330] ha-948000 host status = "Stopped" (err=<nil>)
	I0318 03:56:48.308003   11955 status.go:343] host is not running, skipping remaining checks
	I0318 03:56:48.308006   11955 status.go:257] ha-948000 status: &{Name:ha-948000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr: exit status 7 (77.595375ms)

                                                
                                                
-- stdout --
	ha-948000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 03:56:49.839368   11957 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:56:49.839547   11957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:49.839551   11957 out.go:304] Setting ErrFile to fd 2...
	I0318 03:56:49.839555   11957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:49.839994   11957 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:56:49.840204   11957 out.go:298] Setting JSON to false
	I0318 03:56:49.840220   11957 mustload.go:65] Loading cluster: ha-948000
	I0318 03:56:49.840488   11957 notify.go:220] Checking for updates...
	I0318 03:56:49.840727   11957 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:56:49.840753   11957 status.go:255] checking status of ha-948000 ...
	I0318 03:56:49.841196   11957 status.go:330] ha-948000 host status = "Stopped" (err=<nil>)
	I0318 03:56:49.841201   11957 status.go:343] host is not running, skipping remaining checks
	I0318 03:56:49.841204   11957 status.go:257] ha-948000 status: &{Name:ha-948000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr: exit status 7 (76.463292ms)

                                                
                                                
-- stdout --
	ha-948000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 03:56:51.381381   11959 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:56:51.381549   11959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:51.381558   11959 out.go:304] Setting ErrFile to fd 2...
	I0318 03:56:51.381560   11959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:51.381708   11959 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:56:51.381887   11959 out.go:298] Setting JSON to false
	I0318 03:56:51.381902   11959 mustload.go:65] Loading cluster: ha-948000
	I0318 03:56:51.381932   11959 notify.go:220] Checking for updates...
	I0318 03:56:51.382174   11959 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:56:51.382182   11959 status.go:255] checking status of ha-948000 ...
	I0318 03:56:51.382450   11959 status.go:330] ha-948000 host status = "Stopped" (err=<nil>)
	I0318 03:56:51.382455   11959 status.go:343] host is not running, skipping remaining checks
	I0318 03:56:51.382458   11959 status.go:257] ha-948000 status: &{Name:ha-948000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr: exit status 7 (76.410208ms)

                                                
                                                
-- stdout --
	ha-948000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 03:56:54.503069   11961 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:56:54.503241   11961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:54.503245   11961 out.go:304] Setting ErrFile to fd 2...
	I0318 03:56:54.503248   11961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:54.503391   11961 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:56:54.503529   11961 out.go:298] Setting JSON to false
	I0318 03:56:54.503544   11961 mustload.go:65] Loading cluster: ha-948000
	I0318 03:56:54.503573   11961 notify.go:220] Checking for updates...
	I0318 03:56:54.503767   11961 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:56:54.503775   11961 status.go:255] checking status of ha-948000 ...
	I0318 03:56:54.504040   11961 status.go:330] ha-948000 host status = "Stopped" (err=<nil>)
	I0318 03:56:54.504044   11961 status.go:343] host is not running, skipping remaining checks
	I0318 03:56:54.504047   11961 status.go:257] ha-948000 status: &{Name:ha-948000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr: exit status 7 (77.643166ms)

                                                
                                                
-- stdout --
	ha-948000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 03:56:58.962400   11963 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:56:58.962577   11963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:58.962582   11963 out.go:304] Setting ErrFile to fd 2...
	I0318 03:56:58.962584   11963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:56:58.962767   11963 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:56:58.962926   11963 out.go:298] Setting JSON to false
	I0318 03:56:58.962941   11963 mustload.go:65] Loading cluster: ha-948000
	I0318 03:56:58.962977   11963 notify.go:220] Checking for updates...
	I0318 03:56:58.963201   11963 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:56:58.963209   11963 status.go:255] checking status of ha-948000 ...
	I0318 03:56:58.963516   11963 status.go:330] ha-948000 host status = "Stopped" (err=<nil>)
	I0318 03:56:58.963521   11963 status.go:343] host is not running, skipping remaining checks
	I0318 03:56:58.963524   11963 status.go:257] ha-948000 status: &{Name:ha-948000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr: exit status 7 (74.750375ms)

                                                
                                                
-- stdout --
	ha-948000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 03:57:03.076700   11968 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:57:03.076892   11968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:57:03.076897   11968 out.go:304] Setting ErrFile to fd 2...
	I0318 03:57:03.076899   11968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:57:03.077065   11968 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:57:03.077221   11968 out.go:298] Setting JSON to false
	I0318 03:57:03.077240   11968 mustload.go:65] Loading cluster: ha-948000
	I0318 03:57:03.077276   11968 notify.go:220] Checking for updates...
	I0318 03:57:03.077504   11968 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:57:03.077513   11968 status.go:255] checking status of ha-948000 ...
	I0318 03:57:03.077808   11968 status.go:330] ha-948000 host status = "Stopped" (err=<nil>)
	I0318 03:57:03.077813   11968 status.go:343] host is not running, skipping remaining checks
	I0318 03:57:03.077816   11968 status.go:257] ha-948000 status: &{Name:ha-948000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr: exit status 7 (76.477375ms)

                                                
                                                
-- stdout --
	ha-948000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 03:57:09.450597   11970 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:57:09.450776   11970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:57:09.450780   11970 out.go:304] Setting ErrFile to fd 2...
	I0318 03:57:09.450784   11970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:57:09.450962   11970 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:57:09.451108   11970 out.go:298] Setting JSON to false
	I0318 03:57:09.451123   11970 mustload.go:65] Loading cluster: ha-948000
	I0318 03:57:09.451150   11970 notify.go:220] Checking for updates...
	I0318 03:57:09.451400   11970 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:57:09.451412   11970 status.go:255] checking status of ha-948000 ...
	I0318 03:57:09.451678   11970 status.go:330] ha-948000 host status = "Stopped" (err=<nil>)
	I0318 03:57:09.451682   11970 status.go:343] host is not running, skipping remaining checks
	I0318 03:57:09.451685   11970 status.go:257] ha-948000 status: &{Name:ha-948000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr: exit status 7 (76.802083ms)

                                                
                                                
-- stdout --
	ha-948000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 03:57:24.997786   11975 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:57:24.997948   11975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:57:24.997952   11975 out.go:304] Setting ErrFile to fd 2...
	I0318 03:57:24.997956   11975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:57:24.998136   11975 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:57:24.998301   11975 out.go:298] Setting JSON to false
	I0318 03:57:24.998317   11975 mustload.go:65] Loading cluster: ha-948000
	I0318 03:57:24.998363   11975 notify.go:220] Checking for updates...
	I0318 03:57:24.998573   11975 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:57:24.998580   11975 status.go:255] checking status of ha-948000 ...
	I0318 03:57:24.998833   11975 status.go:330] ha-948000 host status = "Stopped" (err=<nil>)
	I0318 03:57:24.998838   11975 status.go:343] host is not running, skipping remaining checks
	I0318 03:57:24.998841   11975 status.go:257] ha-948000 status: &{Name:ha-948000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000: exit status 7 (34.077416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (37.61s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-948000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-948000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-948000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-948000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-948000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-948000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-948000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-948000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000: exit status 7 (32.119625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.10s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-948000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-948000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-948000 -v=7 --alsologtostderr: (3.298975125s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-948000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-948000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.231755458s)

                                                
                                                
-- stdout --
	* [ha-948000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-948000" primary control-plane node in "ha-948000" cluster
	* Restarting existing qemu2 VM for "ha-948000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-948000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 03:57:28.536099   12005 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:57:28.536261   12005 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:57:28.536266   12005 out.go:304] Setting ErrFile to fd 2...
	I0318 03:57:28.536269   12005 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:57:28.536412   12005 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:57:28.537593   12005 out.go:298] Setting JSON to false
	I0318 03:57:28.556330   12005 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7021,"bootTime":1710752427,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 03:57:28.556384   12005 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 03:57:28.561643   12005 out.go:177] * [ha-948000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 03:57:28.569426   12005 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 03:57:28.573510   12005 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 03:57:28.569455   12005 notify.go:220] Checking for updates...
	I0318 03:57:28.577698   12005 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 03:57:28.580545   12005 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 03:57:28.583520   12005 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 03:57:28.586591   12005 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 03:57:28.589838   12005 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:57:28.589890   12005 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 03:57:28.594543   12005 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 03:57:28.601522   12005 start.go:297] selected driver: qemu2
	I0318 03:57:28.601528   12005 start.go:901] validating driver "qemu2" against &{Name:ha-948000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-948000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 03:57:28.601582   12005 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 03:57:28.604002   12005 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 03:57:28.604052   12005 cni.go:84] Creating CNI manager for ""
	I0318 03:57:28.604058   12005 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 03:57:28.604108   12005 start.go:340] cluster config:
	{Name:ha-948000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-948000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 03:57:28.608719   12005 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 03:57:28.616531   12005 out.go:177] * Starting "ha-948000" primary control-plane node in "ha-948000" cluster
	I0318 03:57:28.620519   12005 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 03:57:28.620536   12005 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 03:57:28.620559   12005 cache.go:56] Caching tarball of preloaded images
	I0318 03:57:28.620629   12005 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 03:57:28.620637   12005 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 03:57:28.620714   12005 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/ha-948000/config.json ...
	I0318 03:57:28.621234   12005 start.go:360] acquireMachinesLock for ha-948000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 03:57:28.621271   12005 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "ha-948000"
	I0318 03:57:28.621282   12005 start.go:96] Skipping create...Using existing machine configuration
	I0318 03:57:28.621289   12005 fix.go:54] fixHost starting: 
	I0318 03:57:28.621424   12005 fix.go:112] recreateIfNeeded on ha-948000: state=Stopped err=<nil>
	W0318 03:57:28.621434   12005 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 03:57:28.625461   12005 out.go:177] * Restarting existing qemu2 VM for "ha-948000" ...
	I0318 03:57:28.633571   12005 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:d4:dd:65:17:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/disk.qcow2
	I0318 03:57:28.635747   12005 main.go:141] libmachine: STDOUT: 
	I0318 03:57:28.635771   12005 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 03:57:28.635804   12005 fix.go:56] duration metric: took 14.515333ms for fixHost
	I0318 03:57:28.635810   12005 start.go:83] releasing machines lock for "ha-948000", held for 14.535125ms
	W0318 03:57:28.635818   12005 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 03:57:28.635851   12005 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 03:57:28.635856   12005 start.go:728] Will try again in 5 seconds ...
	I0318 03:57:33.637922   12005 start.go:360] acquireMachinesLock for ha-948000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 03:57:33.638281   12005 start.go:364] duration metric: took 269.541µs to acquireMachinesLock for "ha-948000"
	I0318 03:57:33.638410   12005 start.go:96] Skipping create...Using existing machine configuration
	I0318 03:57:33.638430   12005 fix.go:54] fixHost starting: 
	I0318 03:57:33.639101   12005 fix.go:112] recreateIfNeeded on ha-948000: state=Stopped err=<nil>
	W0318 03:57:33.639126   12005 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 03:57:33.647588   12005 out.go:177] * Restarting existing qemu2 VM for "ha-948000" ...
	I0318 03:57:33.651632   12005 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:d4:dd:65:17:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/disk.qcow2
	I0318 03:57:33.661358   12005 main.go:141] libmachine: STDOUT: 
	I0318 03:57:33.661440   12005 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 03:57:33.661518   12005 fix.go:56] duration metric: took 23.089958ms for fixHost
	I0318 03:57:33.661538   12005 start.go:83] releasing machines lock for "ha-948000", held for 23.238625ms
	W0318 03:57:33.661687   12005 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-948000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-948000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 03:57:33.670415   12005 out.go:177] 
	W0318 03:57:33.674582   12005 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 03:57:33.674611   12005 out.go:239] * 
	* 
	W0318 03:57:33.677323   12005 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 03:57:33.686476   12005 out.go:177] 

** /stderr **
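The "libmachine: executing:" lines above show that the qemu2 driver never invokes qemu-system-aarch64 directly: the whole command line is handed to socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the resulting descriptor to qemu as fd 3 (the -netdev socket,id=net0,fd=3 argument). A condensed Go sketch of that exec pattern, with the argument list trimmed for readability (illustrative only, not minikube's actual driver code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// socket_vmnet_client takes the socket path first, then the real command.
	cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
		"/var/run/socket_vmnet", // dialed before qemu is exec'd
		"qemu-system-aarch64", "-M", "virt,highmem=off", "-cpu", "host",
		"-netdev", "socket,id=net0,fd=3", // fd 3 carries the vmnet connection
		"-daemonize", "disk.qcow2") // ISO/QMP/pidfile flags omitted here
	if out, err := cmd.CombinedOutput(); err != nil {
		// With nothing listening on /var/run/socket_vmnet, the wrapper fails
		// before qemu ever starts: the "Connection refused" STDERR above.
		fmt.Printf("STDERR: %s(%v)\n", out, err)
	}
}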
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-948000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-948000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000: exit status 7 (33.969125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.67s)
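Every restart attempt in this test dies on the same line: Failed to connect to "/var/run/socket_vmnet": Connection refused. In other words, no socket_vmnet daemon was accepting connections on the build host, so the VM never left the "Stopped" state. A minimal diagnostic sketch (not part of the test suite) that reproduces exactly that check by dialing the unix socket from Go:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the failing log lines
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// The socket file may exist, but with no daemon accepting on it the
		// dial fails with connection refused, matching the driver error above.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}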

TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-948000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-948000 node delete m03 -v=7 --alsologtostderr: exit status 83 (44.281667ms)

-- stdout --
	* The control-plane node ha-948000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-948000"

-- /stdout --
** stderr ** 
	I0318 03:57:33.836997   12017 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:57:33.837429   12017 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:57:33.837433   12017 out.go:304] Setting ErrFile to fd 2...
	I0318 03:57:33.837436   12017 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:57:33.837606   12017 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:57:33.837852   12017 mustload.go:65] Loading cluster: ha-948000
	I0318 03:57:33.838038   12017 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:57:33.842092   12017 out.go:177] * The control-plane node ha-948000 host is not running: state=Stopped
	I0318 03:57:33.846110   12017 out.go:177]   To start a cluster, run: "minikube start -p ha-948000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-948000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr: exit status 7 (32.251917ms)

-- stdout --
	ha-948000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 03:57:33.881413   12019 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:57:33.881556   12019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:57:33.881559   12019 out.go:304] Setting ErrFile to fd 2...
	I0318 03:57:33.881562   12019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:57:33.881700   12019 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:57:33.881828   12019 out.go:298] Setting JSON to false
	I0318 03:57:33.881840   12019 mustload.go:65] Loading cluster: ha-948000
	I0318 03:57:33.881894   12019 notify.go:220] Checking for updates...
	I0318 03:57:33.882034   12019 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:57:33.882041   12019 status.go:255] checking status of ha-948000 ...
	I0318 03:57:33.882249   12019 status.go:330] ha-948000 host status = "Stopped" (err=<nil>)
	I0318 03:57:33.882253   12019 status.go:343] host is not running, skipping remaining checks
	I0318 03:57:33.882255   12019 status.go:257] ha-948000 status: &{Name:ha-948000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000: exit status 7 (32.392458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.11s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-948000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-948000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-948000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-948000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000: exit status 7 (32.471166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.11s)
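The assertion above inspects `profile list --output json` and compares the profile's Status field ("Degraded" expected, "Stopped" observed). A trimmed Go sketch of that check; the struct mirrors only the two keys the assertion reads, not minikube's full config schema, and the binary path is the one used throughout this report:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models just the fields compared by the test.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test wants "Degraded"; with the VM never started it sees "Stopped".
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}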

TestMultiControlPlane/serial/StopCluster (2.12s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-948000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-948000 stop -v=7 --alsologtostderr: (2.015024583s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr: exit status 7 (72.656875ms)

-- stdout --
	ha-948000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 03:57:36.110006   12041 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:57:36.110176   12041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:57:36.110180   12041 out.go:304] Setting ErrFile to fd 2...
	I0318 03:57:36.110183   12041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:57:36.110340   12041 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:57:36.110486   12041 out.go:298] Setting JSON to false
	I0318 03:57:36.110500   12041 mustload.go:65] Loading cluster: ha-948000
	I0318 03:57:36.110526   12041 notify.go:220] Checking for updates...
	I0318 03:57:36.110753   12041 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:57:36.110761   12041 status.go:255] checking status of ha-948000 ...
	I0318 03:57:36.111010   12041 status.go:330] ha-948000 host status = "Stopped" (err=<nil>)
	I0318 03:57:36.111014   12041 status.go:343] host is not running, skipping remaining checks
	I0318 03:57:36.111017   12041 status.go:257] ha-948000 status: &{Name:ha-948000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr": ha-948000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr": ha-948000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-948000 status -v=7 --alsologtostderr": ha-948000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000: exit status 7 (34.179167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (2.12s)
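Note the exit-code convention running through these post-mortems: `status` exits 7 when the host is stopped, which the helpers log as "may be ok" rather than failing outright. A small sketch of that pattern, with paths and flags as they appear in the log above (the reading of exit code 7 is taken from this report, not from minikube documentation):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "ha-948000", "-n", "ha-948000")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// A stopped host reports through the exit code (7 in this report),
		// so a non-zero exit here is informational, not a hard failure.
		fmt.Printf("status exited %d: %s", ee.ExitCode(), out)
		return
	}
	fmt.Printf("host: %s", out)
}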

TestMultiControlPlane/serial/RestartCluster (5.27s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-948000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-948000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.193743875s)

-- stdout --
	* [ha-948000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-948000" primary control-plane node in "ha-948000" cluster
	* Restarting existing qemu2 VM for "ha-948000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-948000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 03:57:36.176658   12045 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:57:36.176781   12045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:57:36.176785   12045 out.go:304] Setting ErrFile to fd 2...
	I0318 03:57:36.176787   12045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:57:36.176914   12045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:57:36.177919   12045 out.go:298] Setting JSON to false
	I0318 03:57:36.193883   12045 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7029,"bootTime":1710752427,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 03:57:36.193944   12045 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 03:57:36.199056   12045 out.go:177] * [ha-948000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 03:57:36.205975   12045 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 03:57:36.209992   12045 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 03:57:36.206031   12045 notify.go:220] Checking for updates...
	I0318 03:57:36.216987   12045 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 03:57:36.221031   12045 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 03:57:36.227956   12045 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 03:57:36.231984   12045 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 03:57:36.235243   12045 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:57:36.235525   12045 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 03:57:36.240006   12045 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 03:57:36.246000   12045 start.go:297] selected driver: qemu2
	I0318 03:57:36.246007   12045 start.go:901] validating driver "qemu2" against &{Name:ha-948000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-948000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 03:57:36.246078   12045 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 03:57:36.248320   12045 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 03:57:36.248370   12045 cni.go:84] Creating CNI manager for ""
	I0318 03:57:36.248375   12045 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 03:57:36.248414   12045 start.go:340] cluster config:
	{Name:ha-948000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-948000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 03:57:36.252790   12045 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 03:57:36.259981   12045 out.go:177] * Starting "ha-948000" primary control-plane node in "ha-948000" cluster
	I0318 03:57:36.263939   12045 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 03:57:36.263955   12045 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 03:57:36.263966   12045 cache.go:56] Caching tarball of preloaded images
	I0318 03:57:36.264021   12045 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 03:57:36.264027   12045 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 03:57:36.264080   12045 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/ha-948000/config.json ...
	I0318 03:57:36.264556   12045 start.go:360] acquireMachinesLock for ha-948000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 03:57:36.264584   12045 start.go:364] duration metric: took 21.833µs to acquireMachinesLock for "ha-948000"
	I0318 03:57:36.264593   12045 start.go:96] Skipping create...Using existing machine configuration
	I0318 03:57:36.264598   12045 fix.go:54] fixHost starting: 
	I0318 03:57:36.264720   12045 fix.go:112] recreateIfNeeded on ha-948000: state=Stopped err=<nil>
	W0318 03:57:36.264729   12045 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 03:57:36.269021   12045 out.go:177] * Restarting existing qemu2 VM for "ha-948000" ...
	I0318 03:57:36.276978   12045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:d4:dd:65:17:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/disk.qcow2
	I0318 03:57:36.279063   12045 main.go:141] libmachine: STDOUT: 
	I0318 03:57:36.279084   12045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 03:57:36.279113   12045 fix.go:56] duration metric: took 14.514458ms for fixHost
	I0318 03:57:36.279117   12045 start.go:83] releasing machines lock for "ha-948000", held for 14.529709ms
	W0318 03:57:36.279125   12045 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 03:57:36.279159   12045 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 03:57:36.279165   12045 start.go:728] Will try again in 5 seconds ...
	I0318 03:57:41.280137   12045 start.go:360] acquireMachinesLock for ha-948000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 03:57:41.280559   12045 start.go:364] duration metric: took 289.334µs to acquireMachinesLock for "ha-948000"
	I0318 03:57:41.280677   12045 start.go:96] Skipping create...Using existing machine configuration
	I0318 03:57:41.280696   12045 fix.go:54] fixHost starting: 
	I0318 03:57:41.281391   12045 fix.go:112] recreateIfNeeded on ha-948000: state=Stopped err=<nil>
	W0318 03:57:41.281418   12045 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 03:57:41.289870   12045 out.go:177] * Restarting existing qemu2 VM for "ha-948000" ...
	I0318 03:57:41.293011   12045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:d4:dd:65:17:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/ha-948000/disk.qcow2
	I0318 03:57:41.302619   12045 main.go:141] libmachine: STDOUT: 
	I0318 03:57:41.302702   12045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 03:57:41.302779   12045 fix.go:56] duration metric: took 22.083208ms for fixHost
	I0318 03:57:41.302796   12045 start.go:83] releasing machines lock for "ha-948000", held for 22.20625ms
	W0318 03:57:41.303050   12045 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-948000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-948000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 03:57:41.310880   12045 out.go:177] 
	W0318 03:57:41.314949   12045 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 03:57:41.315004   12045 out.go:239] * 
	* 
	W0318 03:57:41.317793   12045 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 03:57:41.325837   12045 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-948000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000: exit status 7 (70.351209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.27s)
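The trace shows the start path's retry shape clearly: fixHost fails, minikube logs "StartHost failed, but will try again", waits 5 seconds, retries once, then gives up with GUEST_PROVISION and exit status 80. A condensed reconstruction of that control flow as observed here (startHost stands in for the driver start that keeps failing; this is a reading of the log, not the actual start.go source):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost always fails the way this report does.
func startHost() error {
	return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // the "Will try again in 5 seconds" above
		if err := startHost(); err != nil {
			// Second failure is terminal: surfaces as exit status 80.
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}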

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-948000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-948000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-948000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-948000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000: exit status 7 (32.564083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-948000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-948000 --control-plane -v=7 --alsologtostderr: exit status 83 (42.831042ms)

-- stdout --
	* The control-plane node ha-948000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-948000"

-- /stdout --
** stderr ** 
	I0318 03:57:41.550729   12064 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:57:41.550884   12064 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:57:41.550887   12064 out.go:304] Setting ErrFile to fd 2...
	I0318 03:57:41.550890   12064 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:57:41.551007   12064 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:57:41.551261   12064 mustload.go:65] Loading cluster: ha-948000
	I0318 03:57:41.551443   12064 config.go:182] Loaded profile config "ha-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:57:41.554988   12064 out.go:177] * The control-plane node ha-948000 host is not running: state=Stopped
	I0318 03:57:41.558967   12064 out.go:177]   To start a cluster, run: "minikube start -p ha-948000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-948000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000: exit status 7 (31.767125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-948000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-948000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-948000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-948000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-948000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-948000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-948000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-948000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-948000 -n ha-948000: exit status 7 (32.547833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)

TestImageBuild/serial/Setup (9.89s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-533000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-533000 --driver=qemu2 : exit status 80 (9.820445583s)

-- stdout --
	* [image-533000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-533000" primary control-plane node in "image-533000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-533000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-533000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-533000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-533000 -n image-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-533000 -n image-533000: exit status 7 (73.545583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-533000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.89s)

TestJSONOutput/start/Command (9.72s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-165000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-165000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.714823791s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5fb7c28f-2faf-4ca2-bc01-bb6a1702ce66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-165000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"92fa8f1a-bf94-4f37-84c6-dda878cdf30f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18431"}}
	{"specversion":"1.0","id":"e3fa795f-d10b-4240-a97e-639fe7b6dc0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig"}}
	{"specversion":"1.0","id":"4a1cadc9-91e3-4aa9-8fc5-f5ed7617765d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"32f5726d-b630-4a54-a5d1-feaede845d5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fd10e139-1e49-4676-bb8a-2570badfb16c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube"}}
	{"specversion":"1.0","id":"087e6fae-58d9-4f25-8b34-b7c1e59cd415","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"aa2fab51-7245-4352-a869-40bca1314c7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"34c070ca-dfea-4b06-82f3-ffdf534398e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"d4a7fa91-471e-4c58-b7e7-c97b08c0705f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-165000\" primary control-plane node in \"json-output-165000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7795da5a-a78f-4c92-afd7-65208b2d3c54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"f4302edb-6b0d-4572-828c-d578e010f8f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-165000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"e332ec78-d683-46d0-b7f8-5c2a1608f936","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"7a23375e-c405-4d1e-9f75-e4cc5658182f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"cd2fec48-93e0-4b21-a708-99923d7c5218","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-165000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"8871125d-5e38-4e9e-b60e-b1eb6645ed7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"a1c67cb8-52e7-48f0-bf68-ef4036263067","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-165000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.72s)
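
The `invalid character 'O'` above comes from socket_vmnet_client's raw `OUTPUT:` / `ERROR:` lines being interleaved with the CloudEvents JSON on stdout, so the first non-JSON line breaks decoding. A minimal sketch of the failure mode, assuming a line-by-line decode like the test's (the generic map here is illustrative, not the test's actual event type):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Abbreviated stdout from the failed start: JSON events mixed with raw text.
	stdout := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`

	sc := bufio.NewScanner(strings.NewReader(stdout))
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			// Prints: invalid character 'O' looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			return
		}
	}
}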

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-165000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-165000 --output=json --user=testUser: exit status 83 (78.88575ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ca9b7cc9-22f4-4854-ac07-e09cb553d609","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-165000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"ee892b81-335e-4f22-b9a4-5b2708be6366","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-165000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-165000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-165000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-165000 --output=json --user=testUser: exit status 83 (47.334291ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-165000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-165000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-165000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-165000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.25s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-304000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-304000 --driver=qemu2 : exit status 80 (9.801064209s)

                                                
                                                
-- stdout --
	* [first-304000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-304000" primary control-plane node in "first-304000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-304000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-304000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-304000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-18 03:58:15.583687 -0700 PDT m=+525.969551584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-305000 -n second-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-305000 -n second-305000: exit status 85 (81.297583ms)

                                                
                                                
-- stdout --
	* Profile "second-305000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-305000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-305000" host is not running, skipping log retrieval (state="* Profile \"second-305000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-305000\"")
helpers_test.go:175: Cleaning up "second-305000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-305000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-18 03:58:15.89596 -0700 PDT m=+526.281831917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-304000 -n first-304000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-304000 -n first-304000: exit status 7 (32.082958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-304000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-304000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-304000
--- FAIL: TestMinikubeProfile (10.25s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.62s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-074000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-074000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.553793708s)

                                                
                                                
-- stdout --
	* [mount-start-1-074000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-074000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-074000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-074000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-074000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-074000 -n mount-start-1-074000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-074000 -n mount-start-1-074000: exit status 7 (69.680166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-074000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.62s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-886000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-886000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.8614095s)

                                                
                                                
-- stdout --
	* [multinode-886000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-886000" primary control-plane node in "multinode-886000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-886000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 03:58:27.020078   12230 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:58:27.020216   12230 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:58:27.020220   12230 out.go:304] Setting ErrFile to fd 2...
	I0318 03:58:27.020222   12230 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:58:27.020363   12230 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:58:27.021402   12230 out.go:298] Setting JSON to false
	I0318 03:58:27.037386   12230 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7080,"bootTime":1710752427,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 03:58:27.037439   12230 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 03:58:27.043832   12230 out.go:177] * [multinode-886000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 03:58:27.050743   12230 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 03:58:27.054758   12230 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 03:58:27.050795   12230 notify.go:220] Checking for updates...
	I0318 03:58:27.060695   12230 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 03:58:27.063798   12230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 03:58:27.066656   12230 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 03:58:27.069707   12230 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 03:58:27.072931   12230 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 03:58:27.077693   12230 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 03:58:27.084711   12230 start.go:297] selected driver: qemu2
	I0318 03:58:27.084716   12230 start.go:901] validating driver "qemu2" against <nil>
	I0318 03:58:27.084721   12230 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 03:58:27.086925   12230 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 03:58:27.089745   12230 out.go:177] * Automatically selected the socket_vmnet network
	I0318 03:58:27.096793   12230 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 03:58:27.096830   12230 cni.go:84] Creating CNI manager for ""
	I0318 03:58:27.096835   12230 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 03:58:27.096839   12230 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 03:58:27.096873   12230 start.go:340] cluster config:
	{Name:multinode-886000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-886000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 03:58:27.101634   12230 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 03:58:27.109723   12230 out.go:177] * Starting "multinode-886000" primary control-plane node in "multinode-886000" cluster
	I0318 03:58:27.113719   12230 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 03:58:27.113735   12230 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 03:58:27.113746   12230 cache.go:56] Caching tarball of preloaded images
	I0318 03:58:27.113817   12230 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 03:58:27.113831   12230 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 03:58:27.114115   12230 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/multinode-886000/config.json ...
	I0318 03:58:27.114126   12230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/multinode-886000/config.json: {Name:mkae799eaf945df287c9aebffecefeef4d547215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 03:58:27.114354   12230 start.go:360] acquireMachinesLock for multinode-886000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 03:58:27.114394   12230 start.go:364] duration metric: took 33.375µs to acquireMachinesLock for "multinode-886000"
	I0318 03:58:27.114408   12230 start.go:93] Provisioning new machine with config: &{Name:multinode-886000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-886000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 03:58:27.114442   12230 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 03:58:27.118783   12230 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 03:58:27.137288   12230 start.go:159] libmachine.API.Create for "multinode-886000" (driver="qemu2")
	I0318 03:58:27.137321   12230 client.go:168] LocalClient.Create starting
	I0318 03:58:27.137407   12230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 03:58:27.137439   12230 main.go:141] libmachine: Decoding PEM data...
	I0318 03:58:27.137454   12230 main.go:141] libmachine: Parsing certificate...
	I0318 03:58:27.137499   12230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 03:58:27.137524   12230 main.go:141] libmachine: Decoding PEM data...
	I0318 03:58:27.137532   12230 main.go:141] libmachine: Parsing certificate...
	I0318 03:58:27.138003   12230 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 03:58:27.276587   12230 main.go:141] libmachine: Creating SSH key...
	I0318 03:58:27.348738   12230 main.go:141] libmachine: Creating Disk image...
	I0318 03:58:27.348743   12230 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 03:58:27.348929   12230 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/disk.qcow2
	I0318 03:58:27.361624   12230 main.go:141] libmachine: STDOUT: 
	I0318 03:58:27.361661   12230 main.go:141] libmachine: STDERR: 
	I0318 03:58:27.361712   12230 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/disk.qcow2 +20000M
	I0318 03:58:27.372868   12230 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 03:58:27.372889   12230 main.go:141] libmachine: STDERR: 
	I0318 03:58:27.372901   12230 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/disk.qcow2
	I0318 03:58:27.372906   12230 main.go:141] libmachine: Starting QEMU VM...
	I0318 03:58:27.372938   12230 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:0a:7e:f4:68:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/disk.qcow2
	I0318 03:58:27.374911   12230 main.go:141] libmachine: STDOUT: 
	I0318 03:58:27.374929   12230 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 03:58:27.374947   12230 client.go:171] duration metric: took 237.624542ms to LocalClient.Create
	I0318 03:58:29.377152   12230 start.go:128] duration metric: took 2.262736334s to createHost
	I0318 03:58:29.377220   12230 start.go:83] releasing machines lock for "multinode-886000", held for 2.262873084s
	W0318 03:58:29.377353   12230 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 03:58:29.387293   12230 out.go:177] * Deleting "multinode-886000" in qemu2 ...
	W0318 03:58:29.415525   12230 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 03:58:29.415554   12230 start.go:728] Will try again in 5 seconds ...
	I0318 03:58:34.417728   12230 start.go:360] acquireMachinesLock for multinode-886000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 03:58:34.418101   12230 start.go:364] duration metric: took 290.5µs to acquireMachinesLock for "multinode-886000"
	I0318 03:58:34.418216   12230 start.go:93] Provisioning new machine with config: &{Name:multinode-886000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-886000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 03:58:34.418541   12230 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 03:58:34.427201   12230 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 03:58:34.476313   12230 start.go:159] libmachine.API.Create for "multinode-886000" (driver="qemu2")
	I0318 03:58:34.476366   12230 client.go:168] LocalClient.Create starting
	I0318 03:58:34.476462   12230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 03:58:34.476516   12230 main.go:141] libmachine: Decoding PEM data...
	I0318 03:58:34.476535   12230 main.go:141] libmachine: Parsing certificate...
	I0318 03:58:34.476604   12230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 03:58:34.476646   12230 main.go:141] libmachine: Decoding PEM data...
	I0318 03:58:34.476656   12230 main.go:141] libmachine: Parsing certificate...
	I0318 03:58:34.477146   12230 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 03:58:34.627573   12230 main.go:141] libmachine: Creating SSH key...
	I0318 03:58:34.780331   12230 main.go:141] libmachine: Creating Disk image...
	I0318 03:58:34.780341   12230 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 03:58:34.780534   12230 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/disk.qcow2
	I0318 03:58:34.793165   12230 main.go:141] libmachine: STDOUT: 
	I0318 03:58:34.793189   12230 main.go:141] libmachine: STDERR: 
	I0318 03:58:34.793244   12230 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/disk.qcow2 +20000M
	I0318 03:58:34.804049   12230 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 03:58:34.804064   12230 main.go:141] libmachine: STDERR: 
	I0318 03:58:34.804075   12230 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/disk.qcow2
	I0318 03:58:34.804079   12230 main.go:141] libmachine: Starting QEMU VM...
	I0318 03:58:34.804119   12230 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:ef:aa:6e:14:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/disk.qcow2
	I0318 03:58:34.805786   12230 main.go:141] libmachine: STDOUT: 
	I0318 03:58:34.805803   12230 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 03:58:34.805817   12230 client.go:171] duration metric: took 329.45475ms to LocalClient.Create
	I0318 03:58:36.808156   12230 start.go:128] duration metric: took 2.389565375s to createHost
	I0318 03:58:36.808290   12230 start.go:83] releasing machines lock for "multinode-886000", held for 2.390225167s
	W0318 03:58:36.808676   12230 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-886000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-886000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 03:58:36.822346   12230 out.go:177] 
	W0318 03:58:36.826513   12230 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 03:58:36.826548   12230 out.go:239] * 
	* 
	W0318 03:58:36.829319   12230 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 03:58:36.836259   12230 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-886000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000: exit status 7 (69.307542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-886000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.93s)
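
The `libmachine: executing:` lines above show the mechanism that fails: socket_vmnet_client is expected to dial /var/run/socket_vmnet and then launch qemu with the connected descriptor as fd 3, which `-netdev socket,id=net0,fd=3` consumes; when the dial is refused, qemu is never started. A minimal sketch of that handoff, with the path and netdev flags taken from the log (this is a simplified stand-in, not minikube's or socket_vmnet_client's actual code):

package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	// Dial the unix socket that socket_vmnet listens on.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// This is the exact point at which every run above dies.
		log.Fatalf(`Failed to connect to "/var/run/socket_vmnet": %v`, err)
	}
	f, err := conn.(*net.UnixConn).File()
	if err != nil {
		log.Fatal(err)
	}
	// Hand the connected descriptor to qemu: ExtraFiles[0] becomes fd 3
	// in the child, matching -netdev socket,id=net0,fd=3 from the log.
	cmd := exec.Command("qemu-system-aarch64",
		"-device", "virtio-net-pci,netdev=net0",
		"-netdev", "socket,id=net0,fd=3")
	cmd.ExtraFiles = []*os.File{f}
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}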

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (104.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-886000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-886000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (60.795ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-886000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-886000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-886000 -- rollout status deployment/busybox: exit status 1 (57.859417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-886000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.193084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-886000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.219417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-886000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.119208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-886000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.311625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-886000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.218583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-886000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.957667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-886000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.887ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-886000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.248667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-886000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.819666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-886000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.173125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-886000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.236916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-886000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.609167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-886000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-886000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-886000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.500333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-886000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-886000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-886000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.090917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-886000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-886000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-886000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.636917ms)

** stderr ** 
	error: no server found for cluster "multinode-886000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000: exit status 7 (32.399ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-886000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (104.48s)
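The three identical `get pods` attempts above are the retry pass noted at multinode_test.go:508 ("may be temporary"): the pod-IP query is re-run a few times before multinode_test.go:524 gives up for good. A minimal Go sketch of that retry shape, with a hypothetical fetchPodIPs helper standing in for the test's exec plumbing:

```go
// Sketch of the retry loop suggested by the log: re-run the kubectl
// jsonpath query a few times before treating the failure as fatal.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// fetchPodIPs is a hypothetical stand-in for the test's exec.Command call.
func fetchPodIPs(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	return string(out), err
}

func main() {
	var lastErr error
	for attempt := 0; attempt < 3; attempt++ { // three attempts, matching the log
		ips, err := fetchPodIPs("multinode-886000")
		if err == nil {
			fmt.Println(ips)
			return
		}
		lastErr = err
		time.Sleep(time.Second)
	}
	fmt.Println("failed to resolve pod IPs:", lastErr) // cf. multinode_test.go:524
}
```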

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-886000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.455458ms)

** stderr ** 
	error: no server found for cluster "multinode-886000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000: exit status 7 (32.470916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-886000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-886000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-886000 -v 3 --alsologtostderr: exit status 83 (42.804292ms)

-- stdout --
	* The control-plane node multinode-886000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-886000"

-- /stdout --
** stderr ** 
	I0318 04:00:21.518563   12327 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:00:21.518714   12327 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:21.518717   12327 out.go:304] Setting ErrFile to fd 2...
	I0318 04:00:21.518719   12327 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:21.518842   12327 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:00:21.519091   12327 mustload.go:65] Loading cluster: multinode-886000
	I0318 04:00:21.519264   12327 config.go:182] Loaded profile config "multinode-886000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:00:21.524176   12327 out.go:177] * The control-plane node multinode-886000 host is not running: state=Stopped
	I0318 04:00:21.527152   12327 out.go:177]   To start a cluster, run: "minikube start -p multinode-886000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-886000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000: exit status 7 (32.649ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-886000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-886000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-886000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.452167ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-886000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-886000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-886000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000: exit status 7 (32.792875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-886000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
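The decode failure at multinode_test.go:230 ("unexpected end of JSON input") is what encoding/json reports when handed zero bytes: the kubectl call above exited non-zero without printing a label list, so there is nothing to parse. A self-contained Go sketch reproducing the error:

```go
// Sketch: json.Unmarshal on empty input yields exactly the error logged
// at multinode_test.go:230, since the failed kubectl call printed nothing.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels map[string]string
	err := json.Unmarshal([]byte{}, &labels) // empty stdout from the failed command
	fmt.Println(err)                         // unexpected end of JSON input
}
```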

TestMultiNode/serial/ProfileList (0.11s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-886000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-886000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-886000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-886000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000: exit status 7 (32.005375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-886000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.11s)
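The assertion at multinode_test.go:166 counts the `Config.Nodes` entries in the `profile list --output json` payload, and the JSON quoted in the failure carries exactly one node where a three-node cluster was expected. A reduced Go sketch of that count, with the struct trimmed to the fields the check needs (an assumption, not the test's actual types):

```go
// Sketch: decode a pared-down version of the profile-list JSON from the
// failure above and count the nodes; the real payload has one entry in
// Config.Nodes, hence "include 3 nodes but have 1 nodes".
package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-886000",
		"Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	fmt.Println(len(pl.Valid[0].Config.Nodes)) // 1, not the expected 3
}
```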

TestMultiNode/serial/CopyFile (0.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-886000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-886000 status --output json --alsologtostderr: exit status 7 (32.508416ms)

-- stdout --
	{"Name":"multinode-886000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0318 04:00:21.760926   12340 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:00:21.761071   12340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:21.761074   12340 out.go:304] Setting ErrFile to fd 2...
	I0318 04:00:21.761077   12340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:21.761189   12340 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:00:21.761331   12340 out.go:298] Setting JSON to true
	I0318 04:00:21.761345   12340 mustload.go:65] Loading cluster: multinode-886000
	I0318 04:00:21.761394   12340 notify.go:220] Checking for updates...
	I0318 04:00:21.761563   12340 config.go:182] Loaded profile config "multinode-886000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:00:21.761571   12340 status.go:255] checking status of multinode-886000 ...
	I0318 04:00:21.761778   12340 status.go:330] multinode-886000 host status = "Stopped" (err=<nil>)
	I0318 04:00:21.761782   12340 status.go:343] host is not running, skipping remaining checks
	I0318 04:00:21.761784   12340 status.go:257] multinode-886000 status: &{Name:multinode-886000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-886000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000: exit status 7 (32.433542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-886000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
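The root of the decode failure at multinode_test.go:191 is a shape mismatch: for a profile that currently has one node, `minikube status --output json` prints a single JSON object (see the stdout above), while the test unmarshals into a slice, and encoding/json refuses to put an object into `[]cmd.Status`. A self-contained Go sketch of the mismatch, with a local Status type standing in for cmd.Status:

```go
// Sketch: unmarshalling one JSON object into a slice fails with the same
// class of error as the log's "cannot unmarshal object into Go value of
// type []cmd.Status". The literal is the stdout captured above.
package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	out := []byte(`{"Name":"multinode-886000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	var statuses []Status
	err := json.Unmarshal(out, &statuses)
	fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
}
```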

TestMultiNode/serial/StopNode (0.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-886000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-886000 node stop m03: exit status 85 (52.86525ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-886000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-886000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-886000 status: exit status 7 (32.154375ms)

-- stdout --
	multinode-886000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-886000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-886000 status --alsologtostderr: exit status 7 (32.255708ms)

-- stdout --
	multinode-886000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:00:21.911560   12348 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:00:21.911740   12348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:21.911751   12348 out.go:304] Setting ErrFile to fd 2...
	I0318 04:00:21.911753   12348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:21.911877   12348 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:00:21.912012   12348 out.go:298] Setting JSON to false
	I0318 04:00:21.912026   12348 mustload.go:65] Loading cluster: multinode-886000
	I0318 04:00:21.912095   12348 notify.go:220] Checking for updates...
	I0318 04:00:21.912203   12348 config.go:182] Loaded profile config "multinode-886000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:00:21.912210   12348 status.go:255] checking status of multinode-886000 ...
	I0318 04:00:21.912410   12348 status.go:330] multinode-886000 host status = "Stopped" (err=<nil>)
	I0318 04:00:21.912414   12348 status.go:343] host is not running, skipping remaining checks
	I0318 04:00:21.912416   12348 status.go:257] multinode-886000 status: &{Name:multinode-886000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-886000 status --alsologtostderr": multinode-886000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000: exit status 7 (32.150166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-886000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)
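The "incorrect number of running kubelets" check at multinode_test.go:267 presumably counts `kubelet: Running` occurrences in the status text; with the host stopped, the stdout above contains only `kubelet: Stopped`, so the count comes out zero. A minimal sketch of such a check (an assumption about the test's mechanics, not a quote of it):

```go
// Sketch: count "kubelet: Running" lines in the captured status output.
// For the stdout above the count is 0, tripping any per-node expectation.
package main

import (
	"fmt"
	"strings"
)

func main() {
	stdout := "multinode-886000\n" +
		"type: Control Plane\n" +
		"host: Stopped\n" +
		"kubelet: Stopped\n" +
		"apiserver: Stopped\n" +
		"kubeconfig: Stopped\n"
	fmt.Println(strings.Count(stdout, "kubelet: Running")) // 0
}
```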

TestMultiNode/serial/StartAfterStop (42.2s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-886000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-886000 node start m03 -v=7 --alsologtostderr: exit status 85 (50.851ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0318 04:00:21.975989   12352 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:00:21.976393   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:21.976398   12352 out.go:304] Setting ErrFile to fd 2...
	I0318 04:00:21.976400   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:21.976566   12352 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:00:21.976794   12352 mustload.go:65] Loading cluster: multinode-886000
	I0318 04:00:21.977001   12352 config.go:182] Loaded profile config "multinode-886000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:00:21.981693   12352 out.go:177] 
	W0318 04:00:21.985714   12352 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0318 04:00:21.985718   12352 out.go:239] * 
	* 
	W0318 04:00:21.987599   12352 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:00:21.991629   12352 out.go:177] 

** /stderr **
multinode_test.go:284: I0318 04:00:21.975989   12352 out.go:291] Setting OutFile to fd 1 ...
I0318 04:00:21.976393   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:00:21.976398   12352 out.go:304] Setting ErrFile to fd 2...
I0318 04:00:21.976400   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:00:21.976566   12352 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
I0318 04:00:21.976794   12352 mustload.go:65] Loading cluster: multinode-886000
I0318 04:00:21.977001   12352 config.go:182] Loaded profile config "multinode-886000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:00:21.981693   12352 out.go:177] 
W0318 04:00:21.985714   12352 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0318 04:00:21.985718   12352 out.go:239] * 
* 
W0318 04:00:21.987599   12352 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0318 04:00:21.991629   12352 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-886000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-886000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-886000 status -v=7 --alsologtostderr: exit status 7 (32.3185ms)

-- stdout --
	multinode-886000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:00:22.027367   12354 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:00:22.027515   12354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:22.027518   12354 out.go:304] Setting ErrFile to fd 2...
	I0318 04:00:22.027520   12354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:22.027644   12354 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:00:22.027792   12354 out.go:298] Setting JSON to false
	I0318 04:00:22.027804   12354 mustload.go:65] Loading cluster: multinode-886000
	I0318 04:00:22.027851   12354 notify.go:220] Checking for updates...
	I0318 04:00:22.028008   12354 config.go:182] Loaded profile config "multinode-886000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:00:22.028017   12354 status.go:255] checking status of multinode-886000 ...
	I0318 04:00:22.028223   12354 status.go:330] multinode-886000 host status = "Stopped" (err=<nil>)
	I0318 04:00:22.028227   12354 status.go:343] host is not running, skipping remaining checks
	I0318 04:00:22.028230   12354 status.go:257] multinode-886000 status: &{Name:multinode-886000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-886000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-886000 status -v=7 --alsologtostderr: exit status 7 (78.695208ms)

-- stdout --
	multinode-886000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:00:23.466536   12356 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:00:23.466717   12356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:23.466722   12356 out.go:304] Setting ErrFile to fd 2...
	I0318 04:00:23.466725   12356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:23.466894   12356 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:00:23.467049   12356 out.go:298] Setting JSON to false
	I0318 04:00:23.467065   12356 mustload.go:65] Loading cluster: multinode-886000
	I0318 04:00:23.467104   12356 notify.go:220] Checking for updates...
	I0318 04:00:23.467383   12356 config.go:182] Loaded profile config "multinode-886000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:00:23.467392   12356 status.go:255] checking status of multinode-886000 ...
	I0318 04:00:23.467665   12356 status.go:330] multinode-886000 host status = "Stopped" (err=<nil>)
	I0318 04:00:23.467670   12356 status.go:343] host is not running, skipping remaining checks
	I0318 04:00:23.467673   12356 status.go:257] multinode-886000 status: &{Name:multinode-886000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-886000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-886000 status -v=7 --alsologtostderr: exit status 7 (79.096042ms)

-- stdout --
	multinode-886000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:00:25.249152   12358 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:00:25.249342   12358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:25.249346   12358 out.go:304] Setting ErrFile to fd 2...
	I0318 04:00:25.249349   12358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:25.249503   12358 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:00:25.249672   12358 out.go:298] Setting JSON to false
	I0318 04:00:25.249687   12358 mustload.go:65] Loading cluster: multinode-886000
	I0318 04:00:25.249722   12358 notify.go:220] Checking for updates...
	I0318 04:00:25.249939   12358 config.go:182] Loaded profile config "multinode-886000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:00:25.249948   12358 status.go:255] checking status of multinode-886000 ...
	I0318 04:00:25.250234   12358 status.go:330] multinode-886000 host status = "Stopped" (err=<nil>)
	I0318 04:00:25.250239   12358 status.go:343] host is not running, skipping remaining checks
	I0318 04:00:25.250242   12358 status.go:257] multinode-886000 status: &{Name:multinode-886000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-886000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-886000 status -v=7 --alsologtostderr: exit status 7 (77.295ms)

-- stdout --
	multinode-886000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:00:27.082251   12360 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:00:27.082440   12360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:27.082445   12360 out.go:304] Setting ErrFile to fd 2...
	I0318 04:00:27.082447   12360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:27.082616   12360 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:00:27.082772   12360 out.go:298] Setting JSON to false
	I0318 04:00:27.082787   12360 mustload.go:65] Loading cluster: multinode-886000
	I0318 04:00:27.082830   12360 notify.go:220] Checking for updates...
	I0318 04:00:27.083050   12360 config.go:182] Loaded profile config "multinode-886000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:00:27.083061   12360 status.go:255] checking status of multinode-886000 ...
	I0318 04:00:27.083317   12360 status.go:330] multinode-886000 host status = "Stopped" (err=<nil>)
	I0318 04:00:27.083321   12360 status.go:343] host is not running, skipping remaining checks
	I0318 04:00:27.083325   12360 status.go:257] multinode-886000 status: &{Name:multinode-886000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-886000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-886000 status -v=7 --alsologtostderr: exit status 7 (75.417792ms)

-- stdout --
	multinode-886000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:00:30.295220   12365 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:00:30.295454   12365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:30.295459   12365 out.go:304] Setting ErrFile to fd 2...
	I0318 04:00:30.295462   12365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:30.295674   12365 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:00:30.295886   12365 out.go:298] Setting JSON to false
	I0318 04:00:30.295901   12365 mustload.go:65] Loading cluster: multinode-886000
	I0318 04:00:30.295940   12365 notify.go:220] Checking for updates...
	I0318 04:00:30.296170   12365 config.go:182] Loaded profile config "multinode-886000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:00:30.296178   12365 status.go:255] checking status of multinode-886000 ...
	I0318 04:00:30.296457   12365 status.go:330] multinode-886000 host status = "Stopped" (err=<nil>)
	I0318 04:00:30.296462   12365 status.go:343] host is not running, skipping remaining checks
	I0318 04:00:30.296465   12365 status.go:257] multinode-886000 status: &{Name:multinode-886000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-886000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-886000 status -v=7 --alsologtostderr: exit status 7 (74.23075ms)

-- stdout --
	multinode-886000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:00:35.476241   12367 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:00:35.476394   12367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:35.476398   12367 out.go:304] Setting ErrFile to fd 2...
	I0318 04:00:35.476401   12367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:35.476552   12367 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:00:35.476713   12367 out.go:298] Setting JSON to false
	I0318 04:00:35.476729   12367 mustload.go:65] Loading cluster: multinode-886000
	I0318 04:00:35.476768   12367 notify.go:220] Checking for updates...
	I0318 04:00:35.476998   12367 config.go:182] Loaded profile config "multinode-886000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:00:35.477007   12367 status.go:255] checking status of multinode-886000 ...
	I0318 04:00:35.477287   12367 status.go:330] multinode-886000 host status = "Stopped" (err=<nil>)
	I0318 04:00:35.477292   12367 status.go:343] host is not running, skipping remaining checks
	I0318 04:00:35.477296   12367 status.go:257] multinode-886000 status: &{Name:multinode-886000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-886000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-886000 status -v=7 --alsologtostderr: exit status 7 (78.150292ms)

-- stdout --
	multinode-886000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:00:42.554421   12372 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:00:42.554889   12372 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:42.554895   12372 out.go:304] Setting ErrFile to fd 2...
	I0318 04:00:42.554899   12372 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:42.555148   12372 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:00:42.555577   12372 out.go:298] Setting JSON to false
	I0318 04:00:42.555660   12372 mustload.go:65] Loading cluster: multinode-886000
	I0318 04:00:42.555866   12372 notify.go:220] Checking for updates...
	I0318 04:00:42.556104   12372 config.go:182] Loaded profile config "multinode-886000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:00:42.556124   12372 status.go:255] checking status of multinode-886000 ...
	I0318 04:00:42.556436   12372 status.go:330] multinode-886000 host status = "Stopped" (err=<nil>)
	I0318 04:00:42.556443   12372 status.go:343] host is not running, skipping remaining checks
	I0318 04:00:42.556446   12372 status.go:257] multinode-886000 status: &{Name:multinode-886000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-886000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-886000 status -v=7 --alsologtostderr: exit status 7 (79.768167ms)

-- stdout --
	multinode-886000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:00:51.187194   12377 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:00:51.187366   12377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:51.187371   12377 out.go:304] Setting ErrFile to fd 2...
	I0318 04:00:51.187374   12377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:00:51.187563   12377 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:00:51.187730   12377 out.go:298] Setting JSON to false
	I0318 04:00:51.187744   12377 mustload.go:65] Loading cluster: multinode-886000
	I0318 04:00:51.187786   12377 notify.go:220] Checking for updates...
	I0318 04:00:51.187998   12377 config.go:182] Loaded profile config "multinode-886000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:00:51.188007   12377 status.go:255] checking status of multinode-886000 ...
	I0318 04:00:51.188299   12377 status.go:330] multinode-886000 host status = "Stopped" (err=<nil>)
	I0318 04:00:51.188304   12377 status.go:343] host is not running, skipping remaining checks
	I0318 04:00:51.188307   12377 status.go:257] multinode-886000 status: &{Name:multinode-886000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-886000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-886000 status -v=7 --alsologtostderr: exit status 7 (75.533916ms)

-- stdout --
	multinode-886000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0318 04:01:04.104985   12381 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:01:04.105186   12381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:01:04.105191   12381 out.go:304] Setting ErrFile to fd 2...
	I0318 04:01:04.105194   12381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:01:04.105354   12381 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:01:04.105515   12381 out.go:298] Setting JSON to false
	I0318 04:01:04.105530   12381 mustload.go:65] Loading cluster: multinode-886000
	I0318 04:01:04.105559   12381 notify.go:220] Checking for updates...
	I0318 04:01:04.105771   12381 config.go:182] Loaded profile config "multinode-886000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:01:04.105779   12381 status.go:255] checking status of multinode-886000 ...
	I0318 04:01:04.106057   12381 status.go:330] multinode-886000 host status = "Stopped" (err=<nil>)
	I0318 04:01:04.106062   12381 status.go:343] host is not running, skipping remaining checks
	I0318 04:01:04.106065   12381 status.go:257] multinode-886000 status: &{Name:multinode-886000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-886000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000: exit status 7 (34.812542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-886000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (42.20s)
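The timestamps on the nine status runs above (04:00:22 through 04:01:04) show multinode_test.go:290 polling with a growing gap between attempts until the retry budget runs out at line 294. A hedged Go sketch of that polling loop, with an approximate schedule and a hypothetical nodeIsRunning helper:

```go
// Sketch: poll `minikube status` with an increasing delay until the host
// reports Running or the attempts are exhausted, mirroring the cadence
// visible in the timestamps above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// nodeIsRunning is a hypothetical stand-in for the test's status parsing.
func nodeIsRunning(profile string) bool {
	out, _ := exec.Command("out/minikube-darwin-arm64", "-p", profile,
		"status", "-v=7", "--alsologtostderr").Output()
	return strings.Contains(string(out), "host: Running")
}

func main() {
	delay := time.Second
	for attempt := 0; attempt < 9; attempt++ { // nine runs appear in the log
		if nodeIsRunning("multinode-886000") {
			fmt.Println("node is running")
			return
		}
		time.Sleep(delay)
		delay += delay / 2 // widen the gap between polls, as the timestamps suggest
	}
	fmt.Println("failed to run minikube status") // cf. multinode_test.go:294
}
```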

TestMultiNode/serial/RestartKeepsNodes (8.44s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-886000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-886000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-886000: (3.073302667s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-886000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-886000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.23241125s)

-- stdout --
	* [multinode-886000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-886000" primary control-plane node in "multinode-886000" cluster
	* Restarting existing qemu2 VM for "multinode-886000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-886000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
** stderr ** 
	I0318 04:01:07.316393   12405 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:01:07.316530   12405 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:01:07.316534   12405 out.go:304] Setting ErrFile to fd 2...
	I0318 04:01:07.316537   12405 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:01:07.316694   12405 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:01:07.317860   12405 out.go:298] Setting JSON to false
	I0318 04:01:07.337034   12405 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7240,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:01:07.337100   12405 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:01:07.341928   12405 out.go:177] * [multinode-886000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:01:07.348839   12405 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:01:07.348882   12405 notify.go:220] Checking for updates...
	I0318 04:01:07.356791   12405 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:01:07.360870   12405 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:01:07.363766   12405 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:01:07.366805   12405 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:01:07.369837   12405 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:01:07.371581   12405 config.go:182] Loaded profile config "multinode-886000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:01:07.371637   12405 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:01:07.375789   12405 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:01:07.382686   12405 start.go:297] selected driver: qemu2
	I0318 04:01:07.382693   12405 start.go:901] validating driver "qemu2" against &{Name:multinode-886000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:multinode-886000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:01:07.382752   12405 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:01:07.385093   12405 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:01:07.385145   12405 cni.go:84] Creating CNI manager for ""
	I0318 04:01:07.385154   12405 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 04:01:07.385196   12405 start.go:340] cluster config:
	{Name:multinode-886000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-886000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:01:07.389736   12405 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:01:07.397881   12405 out.go:177] * Starting "multinode-886000" primary control-plane node in "multinode-886000" cluster
	I0318 04:01:07.401768   12405 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:01:07.401783   12405 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:01:07.401793   12405 cache.go:56] Caching tarball of preloaded images
	I0318 04:01:07.401860   12405 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:01:07.401866   12405 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:01:07.401917   12405 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/multinode-886000/config.json ...
	I0318 04:01:07.402357   12405 start.go:360] acquireMachinesLock for multinode-886000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:01:07.402392   12405 start.go:364] duration metric: took 28.709µs to acquireMachinesLock for "multinode-886000"
	I0318 04:01:07.402402   12405 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:01:07.402407   12405 fix.go:54] fixHost starting: 
	I0318 04:01:07.402533   12405 fix.go:112] recreateIfNeeded on multinode-886000: state=Stopped err=<nil>
	W0318 04:01:07.402542   12405 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:01:07.406826   12405 out.go:177] * Restarting existing qemu2 VM for "multinode-886000" ...
	I0318 04:01:07.414820   12405 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:ef:aa:6e:14:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/disk.qcow2
	I0318 04:01:07.417025   12405 main.go:141] libmachine: STDOUT: 
	I0318 04:01:07.417045   12405 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:01:07.417077   12405 fix.go:56] duration metric: took 14.669917ms for fixHost
	I0318 04:01:07.417082   12405 start.go:83] releasing machines lock for "multinode-886000", held for 14.685208ms
	W0318 04:01:07.417090   12405 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:01:07.417127   12405 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:01:07.417133   12405 start.go:728] Will try again in 5 seconds ...
	I0318 04:01:12.419200   12405 start.go:360] acquireMachinesLock for multinode-886000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:01:12.419619   12405 start.go:364] duration metric: took 319.959µs to acquireMachinesLock for "multinode-886000"
	I0318 04:01:12.419749   12405 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:01:12.419767   12405 fix.go:54] fixHost starting: 
	I0318 04:01:12.420439   12405 fix.go:112] recreateIfNeeded on multinode-886000: state=Stopped err=<nil>
	W0318 04:01:12.420470   12405 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:01:12.429852   12405 out.go:177] * Restarting existing qemu2 VM for "multinode-886000" ...
	I0318 04:01:12.433872   12405 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:ef:aa:6e:14:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/disk.qcow2
	I0318 04:01:12.443505   12405 main.go:141] libmachine: STDOUT: 
	I0318 04:01:12.443564   12405 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:01:12.443627   12405 fix.go:56] duration metric: took 23.861125ms for fixHost
	I0318 04:01:12.443644   12405 start.go:83] releasing machines lock for "multinode-886000", held for 24.003042ms
	W0318 04:01:12.443822   12405 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-886000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-886000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:01:12.451857   12405 out.go:177] 
	W0318 04:01:12.455929   12405 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:01:12.455960   12405 out.go:239] * 
	* 
	W0318 04:01:12.458387   12405 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:01:12.465822   12405 out.go:177] 
** /stderr **
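The command logged at main.go:141 above shows how the qemu2 driver launches the VM: qemu-system-aarch64 is not executed directly but wrapped in socket_vmnet_client, which first connects to /var/run/socket_vmnet and, judging by the -netdev socket,id=net0,fd=3 flag, hands the connected socket to qemu as file descriptor 3. When that initial connect is refused, qemu never starts at all. A minimal Go sketch of this launch shape (an assumption reconstructed from the log line, not libmachine's actual code; the qemu flags are abridged):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// socket_vmnet_client takes the control socket path first, then the
	// full qemu command line; the remaining flags are elided here, see the
	// "executing:" line in the log above for the complete invocation.
	cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
		"/var/run/socket_vmnet",
		"qemu-system-aarch64", "-M", "virt,highmem=off", "-cpu", "host",
	)
	if out, err := cmd.CombinedOutput(); err != nil {
		// With no daemon listening on the socket, this is where the
		// "Connection refused" above surfaces, before qemu ever runs.
		log.Fatalf("driver start failed: %v\n%s", err, out)
	}
}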
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-886000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-886000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000: exit status 7 (34.565916ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-886000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.44s)
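Both restart attempts above fail with the same error, Failed to connect to "/var/run/socket_vmnet": Connection refused, which points at the socket_vmnet daemon on the CI host rather than at minikube itself. A standalone Go probe (a hypothetical diagnostic, not part of the test suite) reproduces the check without involving qemu:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the log above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" means the socket file exists but nothing is
		// listening; a "no such file" error would mean socket_vmnet was
		// never started on this host at all.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Every remaining failure in this section is a downstream effect of this one refused connection.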
TestMultiNode/serial/DeleteNode (0.11s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-886000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-886000 node delete m03: exit status 83 (43.684917ms)
-- stdout --
	* The control-plane node multinode-886000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-886000"
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-886000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-886000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-886000 status --alsologtostderr: exit status 7 (31.892542ms)
-- stdout --
	multinode-886000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0318 04:01:12.660235   12419 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:01:12.660388   12419 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:01:12.660392   12419 out.go:304] Setting ErrFile to fd 2...
	I0318 04:01:12.660394   12419 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:01:12.660521   12419 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:01:12.660640   12419 out.go:298] Setting JSON to false
	I0318 04:01:12.660652   12419 mustload.go:65] Loading cluster: multinode-886000
	I0318 04:01:12.660707   12419 notify.go:220] Checking for updates...
	I0318 04:01:12.660847   12419 config.go:182] Loaded profile config "multinode-886000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:01:12.660857   12419 status.go:255] checking status of multinode-886000 ...
	I0318 04:01:12.661057   12419 status.go:330] multinode-886000 host status = "Stopped" (err=<nil>)
	I0318 04:01:12.661061   12419 status.go:343] host is not running, skipping remaining checks
	I0318 04:01:12.661063   12419 status.go:257] multinode-886000 status: &{Name:multinode-886000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-886000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000: exit status 7 (32.1145ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-886000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
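Note how the post-mortem helper separates a failing command from a merely stopped host: minikube status exits non-zero (exit status 7 here) yet still prints the host state on stdout, and helpers_test.go:239 records the error as "may be ok". A minimal sketch of that pattern (a hypothetical helper, not the test's own code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Output() returns whatever reached stdout even when the command
	// exits non-zero, so the host state survives the error path.
	out, err := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "multinode-886000").Output()
	state := strings.TrimSpace(string(out))
	if err != nil {
		fmt.Printf("status exited with %v, host state: %q\n", err, state)
		return
	}
	fmt.Println("host state:", state)
}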
TestMultiNode/serial/StopMultiNode (2.25s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-886000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-886000 stop: (2.112987834s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-886000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-886000 status: exit status 7 (68.068541ms)
-- stdout --
	multinode-886000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-886000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-886000 status --alsologtostderr: exit status 7 (34.176958ms)
-- stdout --
	multinode-886000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0318 04:01:14.908344   12437 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:01:14.908479   12437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:01:14.908482   12437 out.go:304] Setting ErrFile to fd 2...
	I0318 04:01:14.908485   12437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:01:14.908603   12437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:01:14.908723   12437 out.go:298] Setting JSON to false
	I0318 04:01:14.908737   12437 mustload.go:65] Loading cluster: multinode-886000
	I0318 04:01:14.908790   12437 notify.go:220] Checking for updates...
	I0318 04:01:14.908933   12437 config.go:182] Loaded profile config "multinode-886000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:01:14.908939   12437 status.go:255] checking status of multinode-886000 ...
	I0318 04:01:14.909141   12437 status.go:330] multinode-886000 host status = "Stopped" (err=<nil>)
	I0318 04:01:14.909144   12437 status.go:343] host is not running, skipping remaining checks
	I0318 04:01:14.909147   12437 status.go:257] multinode-886000 status: &{Name:multinode-886000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-886000 status --alsologtostderr": multinode-886000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-886000 status --alsologtostderr": multinode-886000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000: exit status 7 (32.45975ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-886000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.25s)
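The assertions at multinode_test.go:364 and :368 appear to be simple substring counts: one "host: Stopped" and one "kubelet: Stopped" stanza is expected per node, and since the second node was already lost earlier in this run, only a single stanza is printed. A sketch of that check against the output above (an assumed shape, not the test's exact code):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// The single stanza actually printed by "minikube status" above.
	statusOut := `multinode-886000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`
	const wantStopped = 2 // a two-node cluster should report two stanzas
	if got := strings.Count(statusOut, "host: Stopped"); got != wantStopped {
		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantStopped)
	}
}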
TestMultiNode/serial/RestartMultiNode (5.26s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-886000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-886000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.183962791s)
-- stdout --
	* [multinode-886000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-886000" primary control-plane node in "multinode-886000" cluster
	* Restarting existing qemu2 VM for "multinode-886000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-886000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0318 04:01:14.972863   12441 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:01:14.972974   12441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:01:14.972977   12441 out.go:304] Setting ErrFile to fd 2...
	I0318 04:01:14.972979   12441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:01:14.973108   12441 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:01:14.974076   12441 out.go:298] Setting JSON to false
	I0318 04:01:14.990103   12441 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7247,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:01:14.990168   12441 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:01:14.994834   12441 out.go:177] * [multinode-886000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:01:15.002700   12441 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:01:15.002752   12441 notify.go:220] Checking for updates...
	I0318 04:01:15.006750   12441 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:01:15.009722   12441 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:01:15.012767   12441 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:01:15.015775   12441 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:01:15.017345   12441 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:01:15.021039   12441 config.go:182] Loaded profile config "multinode-886000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:01:15.021338   12441 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:01:15.025732   12441 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:01:15.031686   12441 start.go:297] selected driver: qemu2
	I0318 04:01:15.031692   12441 start.go:901] validating driver "qemu2" against &{Name:multinode-886000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-886000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:01:15.031772   12441 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:01:15.033981   12441 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:01:15.034028   12441 cni.go:84] Creating CNI manager for ""
	I0318 04:01:15.034035   12441 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 04:01:15.034082   12441 start.go:340] cluster config:
	{Name:multinode-886000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-886000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:01:15.038411   12441 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:01:15.046673   12441 out.go:177] * Starting "multinode-886000" primary control-plane node in "multinode-886000" cluster
	I0318 04:01:15.050754   12441 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:01:15.050770   12441 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:01:15.050780   12441 cache.go:56] Caching tarball of preloaded images
	I0318 04:01:15.050831   12441 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:01:15.050836   12441 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:01:15.050904   12441 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/multinode-886000/config.json ...
	I0318 04:01:15.051378   12441 start.go:360] acquireMachinesLock for multinode-886000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:01:15.051404   12441 start.go:364] duration metric: took 20.167µs to acquireMachinesLock for "multinode-886000"
	I0318 04:01:15.051412   12441 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:01:15.051417   12441 fix.go:54] fixHost starting: 
	I0318 04:01:15.051531   12441 fix.go:112] recreateIfNeeded on multinode-886000: state=Stopped err=<nil>
	W0318 04:01:15.051539   12441 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:01:15.054795   12441 out.go:177] * Restarting existing qemu2 VM for "multinode-886000" ...
	I0318 04:01:15.062821   12441 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:ef:aa:6e:14:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/disk.qcow2
	I0318 04:01:15.064767   12441 main.go:141] libmachine: STDOUT: 
	I0318 04:01:15.064785   12441 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:01:15.064813   12441 fix.go:56] duration metric: took 13.394958ms for fixHost
	I0318 04:01:15.064818   12441 start.go:83] releasing machines lock for "multinode-886000", held for 13.411417ms
	W0318 04:01:15.064825   12441 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:01:15.064859   12441 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:01:15.064863   12441 start.go:728] Will try again in 5 seconds ...
	I0318 04:01:20.065843   12441 start.go:360] acquireMachinesLock for multinode-886000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:01:20.066162   12441 start.go:364] duration metric: took 246.959µs to acquireMachinesLock for "multinode-886000"
	I0318 04:01:20.066280   12441 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:01:20.066325   12441 fix.go:54] fixHost starting: 
	I0318 04:01:20.066999   12441 fix.go:112] recreateIfNeeded on multinode-886000: state=Stopped err=<nil>
	W0318 04:01:20.067026   12441 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:01:20.076394   12441 out.go:177] * Restarting existing qemu2 VM for "multinode-886000" ...
	I0318 04:01:20.080639   12441 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:ef:aa:6e:14:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/multinode-886000/disk.qcow2
	I0318 04:01:20.090204   12441 main.go:141] libmachine: STDOUT: 
	I0318 04:01:20.090328   12441 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:01:20.090401   12441 fix.go:56] duration metric: took 24.10575ms for fixHost
	I0318 04:01:20.090419   12441 start.go:83] releasing machines lock for "multinode-886000", held for 24.238875ms
	W0318 04:01:20.090568   12441 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-886000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-886000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:01:20.098176   12441 out.go:177] 
	W0318 04:01:20.102406   12441 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:01:20.102430   12441 out.go:239] * 
	* 
	W0318 04:01:20.105228   12441 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:01:20.113440   12441 out.go:177] 
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-886000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000: exit status 7 (70.217042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-886000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
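The restart path visible in this log is a fixed two-attempt loop: fixHost fails, start.go:728 waits five seconds, and the single retry hits the same refused connection before the run exits with GUEST_PROVISION. A compressed Go sketch of that control flow (reconstructed from the log lines above, not minikube's actual source):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the qemu launch that fails above.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}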
TestMultiNode/serial/ValidateNameConflict (20.3s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-886000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-886000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-886000-m01 --driver=qemu2 : exit status 80 (10.046115541s)
-- stdout --
	* [multinode-886000-m01] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-886000-m01" primary control-plane node in "multinode-886000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-886000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-886000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-886000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-886000-m02 --driver=qemu2 : exit status 80 (9.996618916s)
-- stdout --
	* [multinode-886000-m02] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-886000-m02" primary control-plane node in "multinode-886000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-886000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-886000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-886000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-886000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-886000: exit status 83 (82.4105ms)
-- stdout --
	* The control-plane node multinode-886000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-886000"
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-886000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-886000 -n multinode-886000: exit status 7 (32.875791ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-886000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.30s)
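This test never reaches the behaviour it is meant to exercise, because creating either throwaway profile dies on the same socket error; what it is designed to validate is the naming rule that "-mNN" suffixes belong to nodes of an existing profile, so "multinode-886000-m01" collides with the cluster "multinode-886000". A hypothetical sketch of such a conflict check (illustrative only, not minikube's implementation):

package main

import (
	"fmt"
	"regexp"
)

// nodeSuffix matches names of the form "<profile>-m<NN>" that minikube uses
// for the nodes of a multinode cluster.
var nodeSuffix = regexp.MustCompile(`^(.+)-m(\d+)$`)

func conflictsWith(name string, existing []string) (string, bool) {
	m := nodeSuffix.FindStringSubmatch(name)
	if m == nil {
		return "", false
	}
	for _, e := range existing {
		if e == m[1] {
			return e, true
		}
	}
	return "", false
}

func main() {
	if base, ok := conflictsWith("multinode-886000-m01", []string{"multinode-886000"}); ok {
		fmt.Printf("profile name looks like a node of existing cluster %q\n", base)
	}
}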
TestPreload (10.09s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-545000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-545000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.911656042s)
-- stdout --
	* [test-preload-545000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-545000" primary control-plane node in "test-preload-545000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-545000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0318 04:01:40.680252   12498 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:01:40.680429   12498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:01:40.680432   12498 out.go:304] Setting ErrFile to fd 2...
	I0318 04:01:40.680435   12498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:01:40.680556   12498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:01:40.681582   12498 out.go:298] Setting JSON to false
	I0318 04:01:40.697676   12498 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7273,"bootTime":1710752427,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:01:40.697732   12498 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:01:40.704501   12498 out.go:177] * [test-preload-545000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:01:40.712459   12498 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:01:40.717446   12498 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:01:40.712513   12498 notify.go:220] Checking for updates...
	I0318 04:01:40.722480   12498 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:01:40.726439   12498 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:01:40.727825   12498 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:01:40.730400   12498 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:01:40.733837   12498 config.go:182] Loaded profile config "multinode-886000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:01:40.733903   12498 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:01:40.738288   12498 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:01:40.745462   12498 start.go:297] selected driver: qemu2
	I0318 04:01:40.745468   12498 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:01:40.745475   12498 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:01:40.747751   12498 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:01:40.751494   12498 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:01:40.754524   12498 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:01:40.754577   12498 cni.go:84] Creating CNI manager for ""
	I0318 04:01:40.754585   12498 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:01:40.754591   12498 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:01:40.754621   12498 start.go:340] cluster config:
	{Name:test-preload-545000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-545000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:01:40.759024   12498 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:01:40.767419   12498 out.go:177] * Starting "test-preload-545000" primary control-plane node in "test-preload-545000" cluster
	I0318 04:01:40.771416   12498 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0318 04:01:40.771501   12498 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/test-preload-545000/config.json ...
	I0318 04:01:40.771517   12498 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/test-preload-545000/config.json: {Name:mk32cca0b79f92706bfa3ed72ee8cca61675da57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:01:40.771517   12498 cache.go:107] acquiring lock: {Name:mke787a1c43e6857f491d2d911b14826b4716c98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:01:40.771536   12498 cache.go:107] acquiring lock: {Name:mk615034d69bfd17903704e0cb567a2baad23d2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:01:40.771594   12498 cache.go:107] acquiring lock: {Name:mk346d3f75dbc621a799984d2ae3cbae5ac393c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:01:40.771535   12498 cache.go:107] acquiring lock: {Name:mk2c00731fdac674a297c8fa6d5e181022df50db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:01:40.771730   12498 cache.go:107] acquiring lock: {Name:mk371b96db4d636ccb803eeabae295ffdc6c5ff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:01:40.771753   12498 cache.go:107] acquiring lock: {Name:mk34127d2c335908bbe003c877bf7c096f3a3b26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:01:40.771778   12498 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:01:40.771777   12498 start.go:360] acquireMachinesLock for test-preload-545000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:01:40.771766   12498 cache.go:107] acquiring lock: {Name:mk3197c5c4d4d2ec40280670b7eae1a79ff599e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:01:40.771792   12498 cache.go:107] acquiring lock: {Name:mkfed2123e9e798bb9280bdab492698cdd8762a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:01:40.771868   12498 start.go:364] duration metric: took 81.417µs to acquireMachinesLock for "test-preload-545000"
	I0318 04:01:40.771902   12498 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0318 04:01:40.771913   12498 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0318 04:01:40.771896   12498 start.go:93] Provisioning new machine with config: &{Name:test-preload-545000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-545000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:01:40.772016   12498 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0318 04:01:40.772018   12498 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:01:40.772069   12498 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0318 04:01:40.772043   12498 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0318 04:01:40.772070   12498 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:01:40.780436   12498 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:01:40.772061   12498 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:01:40.786926   12498 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:01:40.787631   12498 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0318 04:01:40.787739   12498 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0318 04:01:40.791997   12498 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0318 04:01:40.792088   12498 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0318 04:01:40.792143   12498 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:01:40.792212   12498 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0318 04:01:40.792321   12498 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:01:40.798695   12498 start.go:159] libmachine.API.Create for "test-preload-545000" (driver="qemu2")
	I0318 04:01:40.798713   12498 client.go:168] LocalClient.Create starting
	I0318 04:01:40.798780   12498 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:01:40.798808   12498 main.go:141] libmachine: Decoding PEM data...
	I0318 04:01:40.798819   12498 main.go:141] libmachine: Parsing certificate...
	I0318 04:01:40.798866   12498 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:01:40.798888   12498 main.go:141] libmachine: Decoding PEM data...
	I0318 04:01:40.798893   12498 main.go:141] libmachine: Parsing certificate...
	I0318 04:01:40.799199   12498 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:01:40.944481   12498 main.go:141] libmachine: Creating SSH key...
	I0318 04:01:41.085597   12498 main.go:141] libmachine: Creating Disk image...
	I0318 04:01:41.085623   12498 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:01:41.085827   12498 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/test-preload-545000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/test-preload-545000/disk.qcow2
	I0318 04:01:41.099108   12498 main.go:141] libmachine: STDOUT: 
	I0318 04:01:41.099134   12498 main.go:141] libmachine: STDERR: 
	I0318 04:01:41.099190   12498 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/test-preload-545000/disk.qcow2 +20000M
	I0318 04:01:41.111309   12498 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:01:41.111331   12498 main.go:141] libmachine: STDERR: 
	I0318 04:01:41.111345   12498 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/test-preload-545000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/test-preload-545000/disk.qcow2
	I0318 04:01:41.111349   12498 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:01:41.111383   12498 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/test-preload-545000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/test-preload-545000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/test-preload-545000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:e6:4e:a2:41:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/test-preload-545000/disk.qcow2
	I0318 04:01:41.113691   12498 main.go:141] libmachine: STDOUT: 
	I0318 04:01:41.113720   12498 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:01:41.113746   12498 client.go:171] duration metric: took 315.036417ms to LocalClient.Create
	I0318 04:01:42.762008   12498 cache.go:162] opening:  /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0318 04:01:42.811407   12498 cache.go:162] opening:  /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0318 04:01:42.898600   12498 cache.go:162] opening:  /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0318 04:01:42.899796   12498 cache.go:162] opening:  /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0318 04:01:42.905904   12498 cache.go:162] opening:  /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0318 04:01:42.910478   12498 cache.go:162] opening:  /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W0318 04:01:42.912836   12498 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0318 04:01:42.912940   12498 cache.go:162] opening:  /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0318 04:01:43.037710   12498 cache.go:157] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0318 04:01:43.037763   12498 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.266241625s
	I0318 04:01:43.037808   12498 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0318 04:01:43.114700   12498 start.go:128] duration metric: took 2.342668041s to createHost
	I0318 04:01:43.114754   12498 start.go:83] releasing machines lock for "test-preload-545000", held for 2.342933917s
	W0318 04:01:43.114808   12498 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:01:43.131963   12498 out.go:177] * Deleting "test-preload-545000" in qemu2 ...
	W0318 04:01:43.161394   12498 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:01:43.161438   12498 start.go:728] Will try again in 5 seconds ...
	W0318 04:01:43.180495   12498 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0318 04:01:43.180622   12498 cache.go:162] opening:  /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 04:01:44.640577   12498 cache.go:157] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0318 04:01:44.640632   12498 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.868931709s
	I0318 04:01:44.640696   12498 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0318 04:01:44.858049   12498 cache.go:157] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0318 04:01:44.858114   12498 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.086696292s
	I0318 04:01:44.858143   12498 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0318 04:01:45.312547   12498 cache.go:157] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0318 04:01:45.312603   12498 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.541186542s
	I0318 04:01:45.312633   12498 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0318 04:01:45.503206   12498 cache.go:157] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0318 04:01:45.503277   12498 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.731868083s
	I0318 04:01:45.503308   12498 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0318 04:01:46.372343   12498 cache.go:157] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0318 04:01:46.372425   12498 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.600850042s
	I0318 04:01:46.372454   12498 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0318 04:01:46.969330   12498 cache.go:157] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0318 04:01:46.969380   12498 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.197905292s
	I0318 04:01:46.969404   12498 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0318 04:01:48.161689   12498 start.go:360] acquireMachinesLock for test-preload-545000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:01:48.162123   12498 start.go:364] duration metric: took 351.166µs to acquireMachinesLock for "test-preload-545000"
	I0318 04:01:48.162257   12498 start.go:93] Provisioning new machine with config: &{Name:test-preload-545000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-545000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:01:48.162554   12498 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:01:48.169338   12498 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:01:48.220755   12498 start.go:159] libmachine.API.Create for "test-preload-545000" (driver="qemu2")
	I0318 04:01:48.220811   12498 client.go:168] LocalClient.Create starting
	I0318 04:01:48.220922   12498 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:01:48.220979   12498 main.go:141] libmachine: Decoding PEM data...
	I0318 04:01:48.221004   12498 main.go:141] libmachine: Parsing certificate...
	I0318 04:01:48.221093   12498 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:01:48.221134   12498 main.go:141] libmachine: Decoding PEM data...
	I0318 04:01:48.221147   12498 main.go:141] libmachine: Parsing certificate...
	I0318 04:01:48.221680   12498 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:01:48.372820   12498 main.go:141] libmachine: Creating SSH key...
	I0318 04:01:48.483187   12498 main.go:141] libmachine: Creating Disk image...
	I0318 04:01:48.483192   12498 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:01:48.483383   12498 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/test-preload-545000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/test-preload-545000/disk.qcow2
	I0318 04:01:48.496092   12498 main.go:141] libmachine: STDOUT: 
	I0318 04:01:48.496115   12498 main.go:141] libmachine: STDERR: 
	I0318 04:01:48.496172   12498 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/test-preload-545000/disk.qcow2 +20000M
	I0318 04:01:48.507286   12498 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:01:48.507311   12498 main.go:141] libmachine: STDERR: 
	I0318 04:01:48.507323   12498 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/test-preload-545000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/test-preload-545000/disk.qcow2
	I0318 04:01:48.507328   12498 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:01:48.507366   12498 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/test-preload-545000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/test-preload-545000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/test-preload-545000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:0e:29:cc:e6:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/test-preload-545000/disk.qcow2
	I0318 04:01:48.509300   12498 main.go:141] libmachine: STDOUT: 
	I0318 04:01:48.509318   12498 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:01:48.509331   12498 client.go:171] duration metric: took 288.522333ms to LocalClient.Create
	I0318 04:01:50.509856   12498 start.go:128] duration metric: took 2.347310375s to createHost
	I0318 04:01:50.509919   12498 start.go:83] releasing machines lock for "test-preload-545000", held for 2.347819916s
	W0318 04:01:50.510124   12498 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-545000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-545000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:01:50.521756   12498 out.go:177] 
	W0318 04:01:50.528761   12498 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:01:50.528789   12498 out.go:239] * 
	* 
	W0318 04:01:50.531542   12498 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:01:50.542647   12498 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-545000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-03-18 04:01:50.563253 -0700 PDT m=+740.954491334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-545000 -n test-preload-545000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-545000 -n test-preload-545000: exit status 7 (67.721292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-545000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-545000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-545000
--- FAIL: TestPreload (10.09s)
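
Every qemu2 start in this run dies at the same step: socket_vmnet_client cannot reach the daemon's unix socket, so QEMU never receives its network file descriptor and host creation is aborted. Before rerunning the suite it is worth probing the daemon directly on the agent. A minimal sketch, assuming socket_vmnet lives under /opt/socket_vmnet as the SocketVMnetClientPath/SocketVMnetPath values in the config dump above indicate (the launchd label and the gateway address are assumptions, not taken from this log):

	# Does the unix socket exist at the path minikube uses?
	ls -l /var/run/socket_vmnet
	# Is the daemon loaded under launchd? (label is a guess; adjust to the install)
	sudo launchctl list | grep -i socket_vmnet
	# If it is not running, start it in the foreground to watch for errors
	# (invocation per the socket_vmnet README; gateway address is an assumption):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet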

TestScheduledStopUnix (10.25s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-278000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-278000 --memory=2048 --driver=qemu2 : exit status 80 (10.073164375s)

-- stdout --
	* [scheduled-stop-278000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-278000" primary control-plane node in "scheduled-stop-278000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-278000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-278000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-278000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-278000" primary control-plane node in "scheduled-stop-278000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-278000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-278000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-03-18 04:02:00.810911 -0700 PDT m=+751.202405084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-278000 -n scheduled-stop-278000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-278000 -n scheduled-stop-278000: exit status 7 (67.722375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-278000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-278000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-278000
--- FAIL: TestScheduledStopUnix (10.25s)
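
The client side can also be exercised without QEMU at all: socket_vmnet_client connects to the socket and then execs whatever command follows, with the connection passed as fd 3 (hence the -netdev socket,id=net0,fd=3 in the QEMU invocations above), so a trivial command cleanly separates "daemon down" from "QEMU misconfigured". A sketch using only paths already shown in these logs; sudo may be needed depending on the socket's permissions:

	# Succeeds and runs `true` if the daemon is up; prints the same
	# 'Failed to connect to "/var/run/socket_vmnet": Connection refused' if not.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	echo "client exit: $?"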

TestSkaffold (16.55s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2967221042 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-306000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-306000 --memory=2600 --driver=qemu2 : exit status 80 (9.969703708s)

-- stdout --
	* [skaffold-306000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-306000" primary control-plane node in "skaffold-306000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-306000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-306000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-306000" primary control-plane node in "skaffold-306000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-306000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-03-18 04:02:17.364685 -0700 PDT m=+767.756592667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-306000 -n skaffold-306000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-306000 -n skaffold-306000: exit status 7 (63.695ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-306000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-306000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-306000
--- FAIL: TestSkaffold (16.55s)
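
The post-mortem in each of these failures is the same two-step probe the harness runs: query the host state, and treat exit status 7 together with state "Stopped" as "profile exists but the VM never came up", which is why log retrieval is skipped. Reproduced by hand (profile name is the one from this run; the reading of exit code 7 is taken from the helpers_test output above, not from minikube documentation):

	out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-306000 -n skaffold-306000
	echo "status exit: $?"   # 7 here accompanied state "Stopped" (may be ok)
	out/minikube-darwin-arm64 delete -p skaffold-306000   # cleanup, as helpers_test.go:178 does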

TestRunningBinaryUpgrade (625.57s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.342445548 start -p running-upgrade-727000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.342445548 start -p running-upgrade-727000 --memory=2200 --vm-driver=qemu2 : (1m9.052878875s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-727000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-727000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m36.978017333s)

-- stdout --
	* [running-upgrade-727000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-727000" primary control-plane node in "running-upgrade-727000" cluster
	* Updating the running qemu2 "running-upgrade-727000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0318 04:03:53.186918   12818 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:03:53.187088   12818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:03:53.187092   12818 out.go:304] Setting ErrFile to fd 2...
	I0318 04:03:53.187094   12818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:03:53.187271   12818 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:03:53.188687   12818 out.go:298] Setting JSON to false
	I0318 04:03:53.208045   12818 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7406,"bootTime":1710752427,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:03:53.208116   12818 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:03:53.213119   12818 out.go:177] * [running-upgrade-727000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:03:53.220479   12818 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:03:53.225127   12818 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:03:53.220614   12818 notify.go:220] Checking for updates...
	I0318 04:03:53.228006   12818 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:03:53.231997   12818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:03:53.235119   12818 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:03:53.238060   12818 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:03:53.241315   12818 config.go:182] Loaded profile config "running-upgrade-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:03:53.245304   12818 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 04:03:53.247033   12818 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:03:53.251059   12818 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:03:53.257905   12818 start.go:297] selected driver: qemu2
	I0318 04:03:53.257911   12818 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-727000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52301 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 04:03:53.257981   12818 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:03:53.261306   12818 cni.go:84] Creating CNI manager for ""
	I0318 04:03:53.261340   12818 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:03:53.261399   12818 start.go:340] cluster config:
	{Name:running-upgrade-727000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52301 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 04:03:53.261458   12818 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:03:53.264103   12818 out.go:177] * Starting "running-upgrade-727000" primary control-plane node in "running-upgrade-727000" cluster
	I0318 04:03:53.271951   12818 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 04:03:53.271984   12818 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0318 04:03:53.272001   12818 cache.go:56] Caching tarball of preloaded images
	I0318 04:03:53.272052   12818 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:03:53.272058   12818 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0318 04:03:53.272111   12818 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/config.json ...
	I0318 04:03:53.272675   12818 start.go:360] acquireMachinesLock for running-upgrade-727000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:03:53.272712   12818 start.go:364] duration metric: took 28.167µs to acquireMachinesLock for "running-upgrade-727000"
	I0318 04:03:53.272729   12818 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:03:53.272882   12818 fix.go:54] fixHost starting: 
	I0318 04:03:53.273651   12818 fix.go:112] recreateIfNeeded on running-upgrade-727000: state=Running err=<nil>
	W0318 04:03:53.273659   12818 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:03:53.280046   12818 out.go:177] * Updating the running qemu2 "running-upgrade-727000" VM ...
	I0318 04:03:53.288069   12818 machine.go:94] provisionDockerMachine start ...
	I0318 04:03:53.288279   12818 main.go:141] libmachine: Using SSH client type: native
	I0318 04:03:53.288572   12818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105041bf0] 0x105044450 <nil>  [] 0s} localhost 52269 <nil> <nil>}
	I0318 04:03:53.288576   12818 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 04:03:53.362542   12818 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-727000
	
	I0318 04:03:53.363746   12818 buildroot.go:166] provisioning hostname "running-upgrade-727000"
	I0318 04:03:53.363797   12818 main.go:141] libmachine: Using SSH client type: native
	I0318 04:03:53.363901   12818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105041bf0] 0x105044450 <nil>  [] 0s} localhost 52269 <nil> <nil>}
	I0318 04:03:53.363906   12818 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-727000 && echo "running-upgrade-727000" | sudo tee /etc/hostname
	I0318 04:03:53.437726   12818 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-727000
	
	I0318 04:03:53.437779   12818 main.go:141] libmachine: Using SSH client type: native
	I0318 04:03:53.437881   12818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105041bf0] 0x105044450 <nil>  [] 0s} localhost 52269 <nil> <nil>}
	I0318 04:03:53.437890   12818 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-727000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-727000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-727000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 04:03:53.508232   12818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 04:03:53.508244   12818 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18431-10628/.minikube CaCertPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18431-10628/.minikube}
	I0318 04:03:53.508255   12818 buildroot.go:174] setting up certificates
	I0318 04:03:53.508274   12818 provision.go:84] configureAuth start
	I0318 04:03:53.508289   12818 provision.go:143] copyHostCerts
	I0318 04:03:53.508381   12818 exec_runner.go:144] found /Users/jenkins/minikube-integration/18431-10628/.minikube/key.pem, removing ...
	I0318 04:03:53.509055   12818 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18431-10628/.minikube/key.pem
	I0318 04:03:53.509187   12818 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18431-10628/.minikube/key.pem (1675 bytes)
	I0318 04:03:53.509358   12818 exec_runner.go:144] found /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.pem, removing ...
	I0318 04:03:53.509362   12818 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.pem
	I0318 04:03:53.509403   12818 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.pem (1078 bytes)
	I0318 04:03:53.509501   12818 exec_runner.go:144] found /Users/jenkins/minikube-integration/18431-10628/.minikube/cert.pem, removing ...
	I0318 04:03:53.509505   12818 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18431-10628/.minikube/cert.pem
	I0318 04:03:53.509541   12818 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18431-10628/.minikube/cert.pem (1123 bytes)
	I0318 04:03:53.509625   12818 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-727000 san=[127.0.0.1 localhost minikube running-upgrade-727000]
	I0318 04:03:53.580876   12818 provision.go:177] copyRemoteCerts
	I0318 04:03:53.581200   12818 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 04:03:53.581220   12818 sshutil.go:53] new ssh client: &{IP:localhost Port:52269 SSHKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/running-upgrade-727000/id_rsa Username:docker}
	I0318 04:03:53.620912   12818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 04:03:53.627879   12818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 04:03:53.634711   12818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 04:03:53.641560   12818 provision.go:87] duration metric: took 133.265666ms to configureAuth
	I0318 04:03:53.641569   12818 buildroot.go:189] setting minikube options for container-runtime
	I0318 04:03:53.641684   12818 config.go:182] Loaded profile config "running-upgrade-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:03:53.641721   12818 main.go:141] libmachine: Using SSH client type: native
	I0318 04:03:53.641811   12818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105041bf0] 0x105044450 <nil>  [] 0s} localhost 52269 <nil> <nil>}
	I0318 04:03:53.641815   12818 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 04:03:53.713854   12818 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 04:03:53.713869   12818 buildroot.go:70] root file system type: tmpfs
	I0318 04:03:53.713926   12818 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 04:03:53.713982   12818 main.go:141] libmachine: Using SSH client type: native
	I0318 04:03:53.714093   12818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105041bf0] 0x105044450 <nil>  [] 0s} localhost 52269 <nil> <nil>}
	I0318 04:03:53.714126   12818 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 04:03:53.788502   12818 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 04:03:53.788562   12818 main.go:141] libmachine: Using SSH client type: native
	I0318 04:03:53.788669   12818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105041bf0] 0x105044450 <nil>  [] 0s} localhost 52269 <nil> <nil>}
	I0318 04:03:53.788677   12818 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 04:03:53.860804   12818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 04:03:53.860815   12818 machine.go:97] duration metric: took 572.755041ms to provisionDockerMachine
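
The one-liner at 04:03:53.788677 is a write-if-changed guard: the freshly rendered docker.service.new is diffed against the live unit, and only when they differ is it moved into place, followed by daemon-reload, enable and restart. The same pattern as a stdlib sketch (paths and unit name taken from the log; everything else illustrative):

    package main

    import (
        "bytes"
        "os"
        "os/exec"
    )

    // updateUnit moves the rendered unit into place and restarts the service
    // only when its content actually differs from the live unit.
    func updateUnit(live, rendered string) error {
        old, _ := os.ReadFile(live) // a missing live unit reads as empty, i.e. "changed"
        want, err := os.ReadFile(rendered)
        if err != nil {
            return err
        }
        if bytes.Equal(old, want) {
            return nil // no drift: skip the disruptive restart entirely
        }
        if err := os.Rename(rendered, live); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
        } {
            if err := exec.Command("systemctl", args...).Run(); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        if err := updateUnit("/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new"); err != nil {
            panic(err)
        }
    }
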
	I0318 04:03:53.860821   12818 start.go:293] postStartSetup for "running-upgrade-727000" (driver="qemu2")
	I0318 04:03:53.860828   12818 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 04:03:53.860883   12818 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 04:03:53.860892   12818 sshutil.go:53] new ssh client: &{IP:localhost Port:52269 SSHKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/running-upgrade-727000/id_rsa Username:docker}
	I0318 04:03:53.898975   12818 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 04:03:53.900293   12818 info.go:137] Remote host: Buildroot 2021.02.12
	I0318 04:03:53.900301   12818 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18431-10628/.minikube/addons for local assets ...
	I0318 04:03:53.900357   12818 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18431-10628/.minikube/files for local assets ...
	I0318 04:03:53.900445   12818 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18431-10628/.minikube/files/etc/ssl/certs/110352.pem -> 110352.pem in /etc/ssl/certs
	I0318 04:03:53.900530   12818 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 04:03:53.903307   12818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/files/etc/ssl/certs/110352.pem --> /etc/ssl/certs/110352.pem (1708 bytes)
	I0318 04:03:53.910323   12818 start.go:296] duration metric: took 49.497959ms for postStartSetup
	I0318 04:03:53.910334   12818 fix.go:56] duration metric: took 637.617042ms for fixHost
	I0318 04:03:53.910372   12818 main.go:141] libmachine: Using SSH client type: native
	I0318 04:03:53.910472   12818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105041bf0] 0x105044450 <nil>  [] 0s} localhost 52269 <nil> <nil>}
	I0318 04:03:53.910476   12818 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 04:03:53.981815   12818 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710759833.708974057
	
	I0318 04:03:53.981827   12818 fix.go:216] guest clock: 1710759833.708974057
	I0318 04:03:53.981832   12818 fix.go:229] Guest: 2024-03-18 04:03:53.708974057 -0700 PDT Remote: 2024-03-18 04:03:53.910336 -0700 PDT m=+0.822797751 (delta=-201.361943ms)
	I0318 04:03:53.981844   12818 fix.go:200] guest clock delta is within tolerance: -201.361943ms
	I0318 04:03:53.981847   12818 start.go:83] releasing machines lock for "running-upgrade-727000", held for 709.149209ms
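
The fix.go lines above read the guest clock with "date +%s.%N", parse the epoch value, and compare it with the host clock; here the -201ms delta is within tolerance, so no resync is forced. A sketch of that check (the parsing mirrors the log; the 2-second tolerance is an assumption, the log does not state minikube's threshold):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // guestTime parses the "seconds.nanoseconds" output of `date +%s.%N`.
    func guestTime(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        nsec := int64(0)
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        g, err := guestTime("1710759833.708974057") // value from the log
        if err != nil {
            panic(err)
        }
        delta := g.Sub(time.Now())
        const tolerance = 2 * time.Second // assumed threshold, not stated in the log
        if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        } else {
            fmt.Printf("guest clock skewed by %v; would resync\n", delta)
        }
    }
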
	I0318 04:03:53.981928   12818 ssh_runner.go:195] Run: cat /version.json
	I0318 04:03:53.981938   12818 sshutil.go:53] new ssh client: &{IP:localhost Port:52269 SSHKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/running-upgrade-727000/id_rsa Username:docker}
	I0318 04:03:53.982004   12818 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 04:03:53.982086   12818 sshutil.go:53] new ssh client: &{IP:localhost Port:52269 SSHKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/running-upgrade-727000/id_rsa Username:docker}
	W0318 04:03:53.982636   12818 sshutil.go:64] dial failure (will retry): dial tcp [::1]:52269: connect: connection refused
	I0318 04:03:53.982669   12818 retry.go:31] will retry after 130.414192ms: dial tcp [::1]:52269: connect: connection refused
	W0318 04:03:54.020158   12818 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0318 04:03:54.020255   12818 ssh_runner.go:195] Run: systemctl --version
	I0318 04:03:54.022453   12818 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 04:03:54.024196   12818 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 04:03:54.024224   12818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0318 04:03:54.027485   12818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0318 04:03:54.032403   12818 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
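
The find/sed pair above rewrites every bridge and podman CNI config so its "subnet" becomes the pod CIDR 10.244.0.0/16 and drops IPv6 entries; the log reports 87-podman-bridge.conflist as the one file touched. The core of that rewrite over a single file, as a regexp sketch (the real command also rewrites "dst" and "gateway" and handles trailing commas):

    package main

    import (
        "os"
        "regexp"
    )

    // subnetRe matches any "subnet" value in a CNI conflist.
    var subnetRe = regexp.MustCompile(`"subnet":\s*"[^"]*"`)

    func main() {
        path := "/etc/cni/net.d/87-podman-bridge.conflist" // the file named in the log
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        out := subnetRe.ReplaceAll(data, []byte(`"subnet": "10.244.0.0/16"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
    }
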
	I0318 04:03:54.032416   12818 start.go:494] detecting cgroup driver to use...
	I0318 04:03:54.032787   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 04:03:54.038343   12818 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0318 04:03:54.041338   12818 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 04:03:54.044148   12818 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 04:03:54.044178   12818 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 04:03:54.047187   12818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 04:03:54.050703   12818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 04:03:54.054338   12818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 04:03:54.059099   12818 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 04:03:54.062396   12818 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 04:03:54.065490   12818 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 04:03:54.068234   12818 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 04:03:54.071445   12818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:03:54.173472   12818 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 04:03:54.184388   12818 start.go:494] detecting cgroup driver to use...
	I0318 04:03:54.184475   12818 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 04:03:54.190549   12818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 04:03:54.195398   12818 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 04:03:54.201265   12818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 04:03:54.205591   12818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 04:03:54.209970   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 04:03:54.215545   12818 ssh_runner.go:195] Run: which cri-dockerd
	I0318 04:03:54.217227   12818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 04:03:54.258933   12818 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 04:03:54.263972   12818 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 04:03:54.362126   12818 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 04:03:54.455151   12818 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 04:03:54.455219   12818 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 04:03:54.461619   12818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:03:54.557421   12818 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 04:04:07.809787   12818 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.252679083s)
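
Before the 13-second docker restart, minikube writes a 130-byte /etc/docker/daemon.json that pins Docker's cgroup driver to cgroupfs, matching the kubelet configuration generated later. The payload itself is not echoed in the log; a plausible shape, inferred from the "configuring docker to use cgroupfs" message, is sketched below (every field value here is an assumption):

    package main

    import (
        "encoding/json"
        "os"
    )

    func main() {
        // Assumed content; the log only records that 130 bytes were written.
        cfg := map[string]any{
            "exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
            "log-driver": "json-file",
            "log-opts":   map[string]string{"max-size": "100m"},
        }
        data, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/docker/daemon.json", data, 0o644); err != nil {
            panic(err)
        }
    }
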
	I0318 04:04:07.809865   12818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 04:04:07.814702   12818 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0318 04:04:07.822213   12818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 04:04:07.829904   12818 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 04:04:07.894595   12818 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 04:04:07.965749   12818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:04:08.059193   12818 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 04:04:08.066493   12818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 04:04:08.071595   12818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:04:08.150519   12818 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 04:04:08.191927   12818 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 04:04:08.192921   12818 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 04:04:08.195105   12818 start.go:562] Will wait 60s for crictl version
	I0318 04:04:08.195153   12818 ssh_runner.go:195] Run: which crictl
	I0318 04:04:08.196539   12818 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 04:04:08.208055   12818 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0318 04:04:08.208124   12818 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 04:04:08.220649   12818 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 04:04:08.235977   12818 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0318 04:04:08.236106   12818 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0318 04:04:08.237808   12818 kubeadm.go:877] updating cluster {Name:running-upgrade-727000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52301 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0318 04:04:08.237878   12818 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 04:04:08.237919   12818 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 04:04:08.248247   12818 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 04:04:08.248255   12818 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 04:04:08.248301   12818 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 04:04:08.251202   12818 ssh_runner.go:195] Run: which lz4
	I0318 04:04:08.252342   12818 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 04:04:08.253477   12818 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 04:04:08.253487   12818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0318 04:04:08.981930   12818 docker.go:649] duration metric: took 729.638667ms to copy over tarball
	I0318 04:04:08.981991   12818 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 04:04:10.368274   12818 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.386304584s)
	I0318 04:04:10.368287   12818 ssh_runner.go:146] rm: /preloaded.tar.lz4
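
Lines 04:04:08.252 through 04:04:10.368 are the preload fast path: a stat existence check (exit status 1 means absent), an scp of the 359 MB image tarball, an lz4-compressed tar extraction into /var, and removal of the tarball. The extraction step, reproduced as an exec sketch (assumes GNU tar and lz4 on PATH, as in the Buildroot guest):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4"
        if _, err := os.Stat(tarball); err != nil {
            panic(err) // the log's existence-check branch would scp it over first
        }
        // Same flags as the logged command: preserve security xattrs, decompress with lz4.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
        _ = os.Remove(tarball) // matches the rm at 04:04:10.368
    }
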
	I0318 04:04:10.384085   12818 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 04:04:10.387437   12818 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0318 04:04:10.392500   12818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:04:10.468560   12818 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 04:04:11.689884   12818 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.221323417s)
	I0318 04:04:11.690001   12818 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 04:04:11.700993   12818 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 04:04:11.701007   12818 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 04:04:11.701012   12818 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 04:04:11.707850   12818 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:04:11.707850   12818 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:04:11.707901   12818 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:04:11.707924   12818 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:04:11.707982   12818 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0318 04:04:11.708078   12818 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:04:11.708465   12818 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:04:11.708534   12818 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:04:11.717861   12818 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:04:11.717952   12818 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0318 04:04:11.718634   12818 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:04:11.718880   12818 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:04:11.718931   12818 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:04:11.718952   12818 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:04:11.718834   12818 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:04:11.718988   12818 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	W0318 04:04:13.659163   12818 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0318 04:04:13.659862   12818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:04:13.696846   12818 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0318 04:04:13.696957   12818 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:04:13.697066   12818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:04:13.715662   12818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0318 04:04:13.726111   12818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0318 04:04:13.726389   12818 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0318 04:04:13.737577   12818 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0318 04:04:13.737602   12818 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:04:13.737632   12818 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0318 04:04:13.737682   12818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0318 04:04:13.737683   12818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0318 04:04:13.758031   12818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0318 04:04:13.772848   12818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:04:13.784914   12818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0318 04:04:13.801097   12818 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0318 04:04:13.801119   12818 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:04:13.801175   12818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:04:13.803069   12818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:04:13.806338   12818 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0318 04:04:13.806352   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0318 04:04:13.809257   12818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:04:13.812030   12818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:04:13.831546   12818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0318 04:04:13.831536   12818 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0318 04:04:13.831599   12818 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0318 04:04:13.831680   12818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0318 04:04:13.836927   12818 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0318 04:04:13.836944   12818 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:04:13.837009   12818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:04:13.887302   12818 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0318 04:04:13.887385   12818 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0318 04:04:13.887391   12818 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0318 04:04:13.887404   12818 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:04:13.887404   12818 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:04:13.887467   12818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:04:13.887474   12818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:04:13.887545   12818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0318 04:04:13.887572   12818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0318 04:04:13.887650   12818 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0318 04:04:13.906541   12818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0318 04:04:13.906884   12818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0318 04:04:13.906929   12818 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0318 04:04:13.906944   12818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0318 04:04:13.915240   12818 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0318 04:04:13.915255   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0318 04:04:13.944367   12818 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0318 04:04:14.272027   12818 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0318 04:04:14.272179   12818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:04:14.283863   12818 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0318 04:04:14.283886   12818 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:04:14.283948   12818 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:04:15.298137   12818 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.014176167s)
	I0318 04:04:15.298182   12818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 04:04:15.298680   12818 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0318 04:04:15.304180   12818 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0318 04:04:15.304256   12818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0318 04:04:15.356591   12818 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 04:04:15.356607   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0318 04:04:15.593277   12818 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 04:04:15.593319   12818 cache_images.go:92] duration metric: took 3.892398125s to LoadCachedImages
	W0318 04:04:15.593617   12818 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
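
The LoadCachedImages block above runs one check-and-load cycle per required image: docker image inspect --format {{.Id}} decides whether the image already exists at the expected arm64 hash; on a mismatch the image is removed, the cached tarball is transferred to /var/lib/minikube/images, and piped through docker load. The overall step still fails because etcd_3.5.3-0 is missing from the host cache, as the X line reports. One iteration, condensed into a sketch (image name, hash and paths taken from the pause:3.7 case above; cp stands in for the scp transfer):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func loadFromCache(image, wantID, cached, remote string) error {
        // An inspect error just means the image is absent; treat as a mismatch.
        out, _ := exec.Command("docker", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if strings.TrimSpace(string(out)) == wantID {
            return nil // already present at the right hash
        }
        // Wrong arch or missing: remove, transfer the cached arm64 tarball, load it.
        _ = exec.Command("docker", "rmi", image).Run()
        if err := exec.Command("cp", cached, remote).Run(); err != nil {
            return fmt.Errorf("transfer: %w", err)
        }
        return exec.Command("/bin/bash", "-c",
            "sudo cat "+remote+" | docker load").Run()
    }

    func main() {
        // docker inspect reports IDs with a "sha256:" prefix; minikube compares the bare hash.
        err := loadFromCache(
            "registry.k8s.io/pause:3.7",
            "sha256:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
            ".minikube/cache/images/arm64/registry.k8s.io/pause_3.7",
            "/var/lib/minikube/images/pause_3.7")
        if err != nil {
            panic(err)
        }
    }
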
	I0318 04:04:15.593625   12818 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0318 04:04:15.593709   12818 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-727000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 04:04:15.593775   12818 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 04:04:15.607745   12818 cni.go:84] Creating CNI manager for ""
	I0318 04:04:15.607756   12818 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:04:15.607761   12818 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 04:04:15.607769   12818 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-727000 NodeName:running-upgrade-727000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 04:04:15.607829   12818 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-727000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
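kubeadm.go:187 renders the whole config above as one file holding four stacked YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); kubeadm dispatches each document by its kind. A stdlib-only sketch that splits the stream and reports the kinds (real parsing would use a YAML library):

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    var kindRe = regexp.MustCompile(`(?m)^kind:\s*(\S+)`)

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        // Documents are separated by a bare "---" line.
        for i, doc := range strings.Split(string(data), "\n---\n") {
            if m := kindRe.FindStringSubmatch(doc); m != nil {
                fmt.Printf("document %d: %s\n", i, m[1])
            }
        }
        // Expected output for the config above:
        // document 0: InitConfiguration
        // document 1: ClusterConfiguration
        // document 2: KubeletConfiguration
        // document 3: KubeProxyConfiguration
    }
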
	I0318 04:04:15.607885   12818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0318 04:04:15.610791   12818 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 04:04:15.610822   12818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 04:04:15.613530   12818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0318 04:04:15.618430   12818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 04:04:15.623556   12818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0318 04:04:15.628683   12818 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0318 04:04:15.630135   12818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:04:15.719247   12818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 04:04:15.726424   12818 certs.go:68] Setting up /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000 for IP: 10.0.2.15
	I0318 04:04:15.726432   12818 certs.go:194] generating shared ca certs ...
	I0318 04:04:15.726442   12818 certs.go:226] acquiring lock for ca certs: {Name:mk88e7d2813085015a17ae8f4e4d52088bf47170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:04:15.726638   12818 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.key
	I0318 04:04:15.726672   12818 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/proxy-client-ca.key
	I0318 04:04:15.726677   12818 certs.go:256] generating profile certs ...
	I0318 04:04:15.726731   12818 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/client.key
	I0318 04:04:15.726742   12818 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/apiserver.key.28752afb
	I0318 04:04:15.726904   12818 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/apiserver.crt.28752afb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0318 04:04:15.804176   12818 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/apiserver.crt.28752afb ...
	I0318 04:04:15.804195   12818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/apiserver.crt.28752afb: {Name:mk24623542f67f241f005bc18466cfb8c5dbfe20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:04:15.804491   12818 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/apiserver.key.28752afb ...
	I0318 04:04:15.804496   12818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/apiserver.key.28752afb: {Name:mkb4082ac46fd0907481e4e63a1a38f770e0bb0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:04:15.804634   12818 certs.go:381] copying /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/apiserver.crt.28752afb -> /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/apiserver.crt
	I0318 04:04:15.804740   12818 certs.go:385] copying /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/apiserver.key.28752afb -> /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/apiserver.key
	I0318 04:04:15.804974   12818 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/proxy-client.key
	I0318 04:04:15.805139   12818 certs.go:484] found cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/11035.pem (1338 bytes)
	W0318 04:04:15.805167   12818 certs.go:480] ignoring /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/11035_empty.pem, impossibly tiny 0 bytes
	I0318 04:04:15.805174   12818 certs.go:484] found cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 04:04:15.805202   12818 certs.go:484] found cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem (1078 bytes)
	I0318 04:04:15.805219   12818 certs.go:484] found cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem (1123 bytes)
	I0318 04:04:15.805240   12818 certs.go:484] found cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/key.pem (1675 bytes)
	I0318 04:04:15.805291   12818 certs.go:484] found cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/files/etc/ssl/certs/110352.pem (1708 bytes)
	I0318 04:04:15.806008   12818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 04:04:15.813283   12818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 04:04:15.820488   12818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 04:04:15.827875   12818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0318 04:04:15.834317   12818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 04:04:15.840666   12818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 04:04:15.847858   12818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 04:04:15.854943   12818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 04:04:15.861478   12818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/files/etc/ssl/certs/110352.pem --> /usr/share/ca-certificates/110352.pem (1708 bytes)
	I0318 04:04:15.867922   12818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 04:04:15.874879   12818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/11035.pem --> /usr/share/ca-certificates/11035.pem (1338 bytes)
	I0318 04:04:15.881640   12818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 04:04:15.886290   12818 ssh_runner.go:195] Run: openssl version
	I0318 04:04:15.888138   12818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110352.pem && ln -fs /usr/share/ca-certificates/110352.pem /etc/ssl/certs/110352.pem"
	I0318 04:04:15.891721   12818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110352.pem
	I0318 04:04:15.893320   12818 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 10:51 /usr/share/ca-certificates/110352.pem
	I0318 04:04:15.893340   12818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110352.pem
	I0318 04:04:15.895181   12818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110352.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 04:04:15.897867   12818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 04:04:15.900727   12818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 04:04:15.902342   12818 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 11:03 /usr/share/ca-certificates/minikubeCA.pem
	I0318 04:04:15.902360   12818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 04:04:15.904085   12818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 04:04:15.907315   12818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11035.pem && ln -fs /usr/share/ca-certificates/11035.pem /etc/ssl/certs/11035.pem"
	I0318 04:04:15.910314   12818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11035.pem
	I0318 04:04:15.911665   12818 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 10:51 /usr/share/ca-certificates/11035.pem
	I0318 04:04:15.911687   12818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11035.pem
	I0318 04:04:15.913424   12818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11035.pem /etc/ssl/certs/51391683.0"
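
The openssl block above installs each CA into /usr/share/ca-certificates and creates the /etc/ssl/certs/<subject-hash>.0 symlinks OpenSSL uses for CA lookup; the hash values (3ec20f2e, b5213941, 51391683) come from openssl x509 -hash -noout. The symlink step as a sketch that shells out to openssl the same way:

    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCert(pemPath string) error {
        // Ask openssl for the subject hash, exactly as the logged command does.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            panic(err)
        }
    }
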
	I0318 04:04:15.916184   12818 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 04:04:15.917626   12818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 04:04:15.919415   12818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 04:04:15.921159   12818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 04:04:15.922886   12818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 04:04:15.925522   12818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 04:04:15.927275   12818 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
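
The six openssl x509 -checkend 86400 runs verify that no control-plane certificate expires within the next 24 hours (exit status 0 means still valid past the window). The equivalent check in Go's crypto/x509, as a sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires inside window.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon) // openssl -checkend would exit 1 in that case
    }
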
	I0318 04:04:15.929340   12818 kubeadm.go:391] StartCluster: {Name:running-upgrade-727000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52301 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 04:04:15.929424   12818 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 04:04:15.940073   12818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 04:04:15.943655   12818 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 04:04:15.943662   12818 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 04:04:15.943664   12818 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 04:04:15.943686   12818 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 04:04:15.947074   12818 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 04:04:15.947108   12818 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-727000" does not appear in /Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:04:15.947126   12818 kubeconfig.go:62] /Users/jenkins/minikube-integration/18431-10628/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-727000" cluster setting kubeconfig missing "running-upgrade-727000" context setting]
	I0318 04:04:15.947307   12818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/kubeconfig: {Name:mk14a68d5fe20b060d9b3cac470478a530bf535b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:04:15.948581   12818 kapi.go:59] client config for running-upgrade-727000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/client.key", CAFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106332a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 04:04:15.953373   12818 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 04:04:15.956299   12818 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-727000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0318 04:04:15.956308   12818 kubeadm.go:1154] stopping kube-system containers ...
	I0318 04:04:15.956346   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 04:04:15.967682   12818 docker.go:483] Stopping containers: [35799c26567d 2012f3d75fdb 5539acdeb319 39092562a330 c18954a8ecc0 5cf992ffe371 6b9cc30d55f9 1f06060e91b5 e020ee6903f4 eb3a19a7d0ae b3b0593f1566 756e804d2063 f495862edcc8 750924b8536c 970608f5aedb 82f8a973bb4d]
	I0318 04:04:15.967754   12818 ssh_runner.go:195] Run: docker stop 35799c26567d 2012f3d75fdb 5539acdeb319 39092562a330 c18954a8ecc0 5cf992ffe371 6b9cc30d55f9 1f06060e91b5 e020ee6903f4 eb3a19a7d0ae b3b0593f1566 756e804d2063 f495862edcc8 750924b8536c 970608f5aedb 82f8a973bb4d
	I0318 04:04:15.978954   12818 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 04:04:16.081811   12818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 04:04:16.086147   12818 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5639 Mar 18 11:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Mar 18 11:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Mar 18 11:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Mar 18 11:03 /etc/kubernetes/scheduler.conf
	
	I0318 04:04:16.086181   12818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/admin.conf
	I0318 04:04:16.089542   12818 kubeadm.go:162] "https://control-plane.minikube.internal:52301" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 04:04:16.089565   12818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 04:04:16.093153   12818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/kubelet.conf
	I0318 04:04:16.096262   12818 kubeadm.go:162] "https://control-plane.minikube.internal:52301" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 04:04:16.096289   12818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 04:04:16.099303   12818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/controller-manager.conf
	I0318 04:04:16.102509   12818 kubeadm.go:162] "https://control-plane.minikube.internal:52301" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 04:04:16.102544   12818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 04:04:16.105456   12818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/scheduler.conf
	I0318 04:04:16.107984   12818 kubeadm.go:162] "https://control-plane.minikube.internal:52301" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 04:04:16.108002   12818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
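The four grep/rm pairs above implement one rule: keep an existing kubeconfig only if it already references the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it. A compact sketch of the same logic (endpoint and paths copied from the log; the actual run issues each grep and rm individually over SSH):

    endpoint="https://control-plane.minikube.internal:52301"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done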
	I0318 04:04:16.110872   12818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 04:04:16.113959   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:04:16.136379   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:04:16.526129   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:04:16.724826   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:04:16.747264   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
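With the old state cleared, the reconfigure walks five kubeadm init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), each against the freshly copied kubeadm.yaml. As a sketch only, the same sequence as a standalone script, with the binary and config paths taken from the log:

    export PATH="/var/lib/minikube/binaries/v1.24.1:$PATH"
    # $phase is intentionally unquoted so "certs all" splits into two arguments
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done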
	I0318 04:04:16.771323   12818 api_server.go:52] waiting for apiserver process to appear ...
	I0318 04:04:16.771400   12818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:04:17.273785   12818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:04:17.773530   12818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:04:18.273446   12818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:04:18.281246   12818 api_server.go:72] duration metric: took 1.509961041s to wait for apiserver process to appear ...
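The ~1.5s wait recorded above is a simple poll: pgrep is retried roughly every 500ms until a kube-apiserver process launched for this profile appears. Roughly, with the same pattern string as the log (a sketch, not the Go implementation):

    # -f matches the full command line, -x requires the pattern to cover it exactly,
    # -n picks the newest matching process
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done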
	I0318 04:04:18.281260   12818 api_server.go:88] waiting for apiserver healthz status ...
	I0318 04:04:18.281488   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:04:23.283923   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:04:23.283944   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:04:28.284225   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:04:28.284286   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:04:33.284735   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:04:33.284821   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:04:38.285482   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:04:38.285514   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:04:43.286397   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:04:43.286496   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:04:48.288351   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:04:48.288369   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:04:53.289894   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:04:53.289934   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:04:58.291875   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:04:58.291900   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:03.293946   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:03.293972   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:08.296045   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:08.296088   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:13.297877   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:13.297896   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:18.299954   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
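From here the run settles into a fixed failure loop: every healthz probe against https://10.0.2.15:8443/healthz times out after five seconds ("context deadline exceeded"), and after a window of consecutive failures minikube pauses to gather diagnostics before probing again. The probe itself is an HTTPS GET that must answer within the client deadline; a hand-run equivalent from inside the guest might look like this (illustrative; -k stands in for the probe's own TLS handling):

    # --max-time mirrors the 5s client deadline visible in the log timestamps
    curl -sk --max-time 5 https://10.0.2.15:8443/healthz || echo "apiserver still not answering"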
	I0318 04:05:18.300166   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:05:18.316215   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:05:18.316293   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:05:18.328576   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:05:18.328650   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:05:18.338897   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:05:18.338976   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:05:18.349432   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:05:18.349522   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:05:18.359655   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:05:18.359715   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:05:18.370425   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:05:18.370514   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:05:18.380507   12818 logs.go:276] 0 containers: []
	W0318 04:05:18.380520   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:05:18.380579   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:05:18.391124   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:05:18.391142   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:05:18.391147   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:05:18.406465   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:05:18.406477   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:05:18.433149   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:05:18.433160   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:05:18.446786   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:05:18.446797   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:05:18.458221   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:05:18.458234   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:05:18.469618   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:05:18.469628   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:05:18.510326   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:05:18.510332   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:05:18.521560   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:05:18.521569   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:05:18.538473   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:05:18.538483   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:05:18.554889   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:05:18.554898   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:05:18.559588   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:05:18.559596   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:05:18.656794   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:05:18.656803   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:05:18.672267   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:05:18.672280   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:05:18.686088   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:05:18.686101   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:05:18.697906   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:05:18.697921   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:05:18.739698   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:05:18.739708   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:05:18.753553   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:05:18.753566   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
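Each diagnostics pass follows the same shape, and the passes after every later failed health window repeat it verbatim, differing only in ordering: enumerate containers per control-plane component with a docker ps name filter, then tail the last 400 lines of each container, plus the kubelet and docker journals, dmesg, kubectl describe nodes, and crictl/docker container status. A condensed sketch of the per-component part (component names and the tail length are taken from the log; the loop form is illustrative):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      for id in $(docker ps -a --filter "name=k8s_${name}" --format '{{.ID}}'); do
        # most components show two IDs here: the current container and its pre-restart predecessor
        docker logs --tail 400 "$id"
      done
    done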
	I0318 04:05:21.268589   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:26.270898   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:26.271320   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:05:26.310624   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:05:26.310768   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:05:26.331207   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:05:26.331337   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:05:26.346234   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:05:26.346314   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:05:26.362180   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:05:26.362263   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:05:26.372952   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:05:26.373015   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:05:26.383893   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:05:26.383974   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:05:26.394427   12818 logs.go:276] 0 containers: []
	W0318 04:05:26.394438   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:05:26.394511   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:05:26.405309   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:05:26.405326   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:05:26.405331   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:05:26.410121   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:05:26.410129   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:05:26.448288   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:05:26.448297   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:05:26.463630   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:05:26.463644   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:05:26.477927   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:05:26.477939   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:05:26.515965   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:05:26.515979   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:05:26.531133   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:05:26.531146   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:05:26.547171   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:05:26.547182   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:05:26.559056   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:05:26.559071   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:05:26.574231   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:05:26.574241   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:05:26.585620   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:05:26.585632   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:05:26.599861   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:05:26.599871   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:05:26.611320   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:05:26.611333   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:05:26.622783   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:05:26.622799   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:05:26.643082   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:05:26.643095   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:05:26.684020   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:05:26.684029   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:05:26.697970   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:05:26.697983   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:05:29.225264   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:34.227548   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:34.227752   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:05:34.244801   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:05:34.244891   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:05:34.258059   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:05:34.258123   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:05:34.268927   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:05:34.268997   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:05:34.279433   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:05:34.279513   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:05:34.290355   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:05:34.290416   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:05:34.305707   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:05:34.305778   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:05:34.316027   12818 logs.go:276] 0 containers: []
	W0318 04:05:34.316042   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:05:34.316094   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:05:34.327546   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:05:34.327573   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:05:34.327578   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:05:34.339053   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:05:34.339066   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:05:34.364411   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:05:34.364420   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:05:34.403410   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:05:34.403418   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:05:34.440220   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:05:34.440242   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:05:34.456161   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:05:34.456185   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:05:34.467604   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:05:34.467613   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:05:34.481918   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:05:34.481929   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:05:34.496229   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:05:34.496247   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:05:34.508279   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:05:34.508290   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:05:34.526983   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:05:34.526995   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:05:34.531728   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:05:34.531734   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:05:34.573360   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:05:34.573371   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:05:34.588911   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:05:34.588922   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:05:34.605511   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:05:34.605525   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:05:34.616981   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:05:34.616993   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:05:34.631853   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:05:34.631871   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:05:37.152016   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:42.153271   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:42.153488   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:05:42.170783   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:05:42.170892   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:05:42.184253   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:05:42.184341   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:05:42.195556   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:05:42.195641   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:05:42.206264   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:05:42.206336   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:05:42.217284   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:05:42.217354   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:05:42.228381   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:05:42.228443   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:05:42.238809   12818 logs.go:276] 0 containers: []
	W0318 04:05:42.238823   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:05:42.238893   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:05:42.249165   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:05:42.249183   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:05:42.249188   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:05:42.263141   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:05:42.263156   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:05:42.280389   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:05:42.280400   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:05:42.292498   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:05:42.292509   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:05:42.328376   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:05:42.328386   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:05:42.364869   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:05:42.364880   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:05:42.376162   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:05:42.376172   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:05:42.402135   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:05:42.402146   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:05:42.416908   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:05:42.416919   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:05:42.432056   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:05:42.432067   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:05:42.449090   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:05:42.449101   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:05:42.460636   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:05:42.460646   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:05:42.474542   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:05:42.474552   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:05:42.488258   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:05:42.488268   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:05:42.499176   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:05:42.499186   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:05:42.510765   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:05:42.510777   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:05:42.549304   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:05:42.549314   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:05:45.055739   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:50.058003   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:50.058306   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:05:50.083041   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:05:50.083160   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:05:50.099705   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:05:50.099783   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:05:50.112809   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:05:50.112884   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:05:50.124201   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:05:50.124274   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:05:50.135802   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:05:50.135873   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:05:50.150614   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:05:50.150682   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:05:50.164320   12818 logs.go:276] 0 containers: []
	W0318 04:05:50.164332   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:05:50.164399   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:05:50.174250   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:05:50.174270   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:05:50.174275   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:05:50.189062   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:05:50.189074   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:05:50.213903   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:05:50.213910   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:05:50.227987   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:05:50.227998   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:05:50.239551   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:05:50.239564   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:05:50.278414   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:05:50.278430   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:05:50.282724   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:05:50.282730   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:05:50.296579   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:05:50.296590   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:05:50.311454   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:05:50.311465   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:05:50.323350   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:05:50.323360   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:05:50.335343   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:05:50.335357   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:05:50.360189   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:05:50.360201   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:05:50.374876   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:05:50.374888   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:05:50.386183   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:05:50.386196   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:05:50.423319   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:05:50.423332   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:05:50.438930   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:05:50.438944   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:05:50.475346   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:05:50.475357   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:05:52.989749   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:57.992071   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:57.992258   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:05:58.009544   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:05:58.009638   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:05:58.023576   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:05:58.023665   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:05:58.035064   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:05:58.035133   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:05:58.045607   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:05:58.045669   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:05:58.056099   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:05:58.056165   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:05:58.068038   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:05:58.068111   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:05:58.078341   12818 logs.go:276] 0 containers: []
	W0318 04:05:58.078352   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:05:58.078409   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:05:58.089002   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:05:58.089018   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:05:58.089023   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:05:58.103255   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:05:58.103266   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:05:58.117454   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:05:58.117465   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:05:58.132173   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:05:58.132184   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:05:58.144152   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:05:58.144162   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:05:58.168882   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:05:58.168900   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:05:58.184003   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:05:58.184014   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:05:58.195746   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:05:58.195759   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:05:58.207636   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:05:58.207649   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:05:58.219556   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:05:58.219567   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:05:58.256886   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:05:58.256896   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:05:58.261701   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:05:58.261711   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:05:58.301952   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:05:58.301965   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:05:58.313785   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:05:58.313797   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:05:58.331433   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:05:58.331446   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:05:58.347513   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:05:58.347526   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:05:58.359457   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:05:58.359468   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:00.896464   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:05.898920   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:05.899345   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:05.938551   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:06:05.938696   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:05.959502   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:06:05.959612   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:05.975013   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:06:05.975094   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:05.987621   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:06:05.987704   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:05.998189   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:06:05.998258   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:06.009030   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:06:06.009096   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:06.019059   12818 logs.go:276] 0 containers: []
	W0318 04:06:06.019077   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:06.019138   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:06.029900   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:06:06.029921   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:06.029928   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:06.034989   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:06:06.034997   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:06:06.046709   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:06:06.046720   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:06:06.058419   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:06:06.058431   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:06:06.073013   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:06:06.073027   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:06:06.090569   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:06:06.090579   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:06:06.102727   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:06.102739   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:06.141537   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:06.141547   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:06.177539   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:06:06.177552   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:06:06.191728   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:06:06.191741   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:06:06.205617   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:06:06.205629   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:06:06.220332   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:06.220343   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:06.245674   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:06:06.245684   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:06.257694   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:06:06.257705   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:06:06.303773   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:06:06.303783   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:06:06.318416   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:06:06.318428   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:06:06.336828   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:06:06.336839   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:06:08.849942   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:13.852005   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:13.852254   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:13.872935   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:06:13.873051   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:13.887681   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:06:13.887760   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:13.899483   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:06:13.899566   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:13.910598   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:06:13.910674   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:13.921588   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:06:13.921661   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:13.932079   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:06:13.932149   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:13.947490   12818 logs.go:276] 0 containers: []
	W0318 04:06:13.947502   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:13.947564   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:13.958052   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:06:13.958068   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:06:13.958073   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:06:13.995908   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:06:13.995920   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:06:14.010907   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:06:14.010918   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:06:14.022280   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:06:14.022292   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:06:14.033704   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:06:14.033716   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:14.045972   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:06:14.045986   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:06:14.060511   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:06:14.060521   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:06:14.072865   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:06:14.072878   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:06:14.091067   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:14.091079   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:14.115293   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:14.115300   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:14.150705   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:06:14.150717   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:06:14.162992   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:06:14.163005   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:06:14.174948   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:14.174960   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:14.214485   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:14.214495   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:14.218788   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:06:14.218793   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:06:14.233139   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:06:14.233152   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:06:14.248109   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:06:14.248119   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:06:16.765401   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:21.767567   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:21.767709   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:21.779728   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:06:21.779796   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:21.790679   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:06:21.790756   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:21.801810   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:06:21.801878   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:21.817100   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:06:21.817201   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:21.835699   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:06:21.835769   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:21.846419   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:06:21.846487   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:21.856440   12818 logs.go:276] 0 containers: []
	W0318 04:06:21.856451   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:21.856510   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:21.867099   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:06:21.867115   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:21.867120   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:21.901089   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:06:21.901103   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:06:21.915176   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:06:21.915190   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:06:21.951571   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:06:21.951589   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:06:21.963321   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:06:21.963333   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:06:21.978605   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:06:21.978617   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:06:21.991922   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:21.991933   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:22.030682   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:06:22.030690   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:06:22.044418   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:06:22.044428   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:06:22.059043   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:06:22.059055   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:06:22.071376   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:06:22.071386   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:06:22.086650   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:06:22.086663   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:06:22.098580   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:06:22.098591   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:06:22.119428   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:22.119438   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:22.124228   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:06:22.124239   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:22.135969   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:22.135979   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:22.161732   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:06:22.161743   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:06:24.675528   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:29.677854   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
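Each failed probe above follows the same shape: api_server.go issues a GET against https://10.0.2.15:8443/healthz and the client gives up roughly five seconds later with "context deadline exceeded". For a manual sanity check from inside the guest, a curl stand-in behaves equivalently; this is an illustrative assumption, not the Go client minikube actually uses:

    # Probe the apiserver health endpoint with a 5s budget; -k skips TLS
    # verification (an assumption for this sketch, since the apiserver cert
    # is self-signed inside the guest).
    curl -k --max-time 5 https://10.0.2.15:8443/healthz
    # A healthy apiserver answers "ok"; in the state logged here the request
    # stalls until the timeout fires, matching the failures above.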
	I0318 04:06:29.678261   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:29.714705   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:06:29.714836   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:29.733727   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:06:29.733823   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:29.756255   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:06:29.756335   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:29.768311   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:06:29.768387   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:29.779452   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:06:29.779519   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:29.790253   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:06:29.790326   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:29.815678   12818 logs.go:276] 0 containers: []
	W0318 04:06:29.815691   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:29.815752   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:29.825899   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
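Each retry then re-enumerates the control-plane containers, one docker ps query per component, keyed on the k8s_<component> name prefix that kubelet-managed containers carry. Collapsed into a loop, the eight queries above amount to the following sketch (commands reused verbatim from the log):

    # One query per component; each prints the matching container IDs, if any.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
        docker ps -a --filter=name=k8s_${c} --format={{.ID}}
    done

Two IDs for a component under docker ps -a (as for kube-apiserver here) usually means an exited container plus its restarted successor, which is why both get tailed in the gathering pass that follows.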
	I0318 04:06:29.825915   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:06:29.825920   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:06:29.840889   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:06:29.840899   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:06:29.852427   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:06:29.852442   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:29.864576   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:06:29.864590   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:06:29.876555   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:06:29.876567   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:06:29.891014   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:29.891024   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:29.914645   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:29.914654   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:29.918734   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:06:29.918743   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:06:29.938267   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:06:29.938282   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:06:29.949988   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:06:29.949999   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:06:29.964599   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:06:29.964609   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:06:29.982547   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:29.982559   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:30.021758   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:30.021768   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:30.056748   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:06:30.056760   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:06:30.095406   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:06:30.095419   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:06:30.109643   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:06:30.109654   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:06:30.121069   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:06:30.121080   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
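The gathering pass itself fans out over three kinds of sources, all visible in the cycle above: docker logs for each container ID found during enumeration, the systemd journals for the kubelet and Docker/cri-docker units, and a kubectl describe nodes run against the in-guest kubeconfig. Lifted from the log, with a placeholder for the container ID:

    docker logs --tail 400 <container-id>             # per container found above
    sudo journalctl -u kubelet -n 400                 # kubelet unit journal
    sudo journalctl -u docker -u cri-docker -n 400    # Docker and cri-docker journals
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig     # node view from inside the guest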
	I0318 04:06:32.634766   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:37.637000   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:37.637281   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:37.671412   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:06:37.671555   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:37.691701   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:06:37.691810   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:37.706740   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:06:37.706827   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:37.721011   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:06:37.721237   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:37.732812   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:06:37.732873   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:37.750748   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:06:37.750808   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:37.761505   12818 logs.go:276] 0 containers: []
	W0318 04:06:37.761518   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:37.761567   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:37.772318   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:06:37.772334   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:06:37.772339   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:06:37.784227   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:06:37.784238   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:06:37.812883   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:37.812898   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:37.848494   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:06:37.848508   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:06:37.863771   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:06:37.863782   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:06:37.902637   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:06:37.902649   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:06:37.920163   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:06:37.920178   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:06:37.935612   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:06:37.935622   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:37.947383   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:37.947394   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:37.951696   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:06:37.951707   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:06:37.965566   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:06:37.965576   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:06:37.977819   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:37.977833   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:38.001561   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:38.001567   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:38.041120   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:06:38.041129   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:06:38.056163   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:06:38.056174   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:06:38.071482   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:06:38.071498   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:06:38.083996   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:06:38.084012   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:06:40.598117   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:45.600426   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:45.600699   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:45.620759   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:06:45.620858   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:45.635535   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:06:45.635611   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:45.647897   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:06:45.647970   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:45.658201   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:06:45.658265   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:45.668297   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:06:45.668354   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:45.678788   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:06:45.678856   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:45.688687   12818 logs.go:276] 0 containers: []
	W0318 04:06:45.688699   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:45.688752   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:45.699593   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:06:45.699615   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:45.699621   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:45.704272   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:06:45.704280   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:06:45.743534   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:06:45.743544   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:06:45.754846   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:06:45.754856   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:06:45.774302   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:06:45.774313   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:06:45.786246   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:06:45.786257   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:06:45.800699   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:06:45.800709   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:06:45.821449   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:45.821459   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:45.846389   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:45.846401   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:45.888101   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:45.888111   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:45.924946   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:06:45.924958   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:06:45.937377   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:06:45.937390   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:06:45.949305   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:06:45.949318   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:06:45.965883   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:06:45.965894   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:06:45.981085   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:06:45.981097   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:06:45.995698   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:06:45.995706   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:06:46.008408   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:06:46.008422   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:48.522877   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:53.525315   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:53.525548   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:53.543303   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:06:53.543392   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:53.558080   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:06:53.558151   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:53.569678   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:06:53.569753   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:53.580711   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:06:53.580792   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:53.591191   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:06:53.591265   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:53.601838   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:06:53.601918   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:53.618696   12818 logs.go:276] 0 containers: []
	W0318 04:06:53.618707   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:53.618769   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:53.628804   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:06:53.628824   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:53.628830   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:53.633155   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:06:53.633162   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:06:53.671524   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:06:53.671535   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:06:53.686237   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:06:53.686247   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:06:53.701790   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:53.701801   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:53.725633   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:53.725644   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:53.764452   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:06:53.764461   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:06:53.778698   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:06:53.778708   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:06:53.790656   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:06:53.790668   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:06:53.802576   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:06:53.802587   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:06:53.821345   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:06:53.821355   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:06:53.835129   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:06:53.835139   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:06:53.848932   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:06:53.848939   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:06:53.862730   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:06:53.862741   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:06:53.875308   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:06:53.875322   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:06:53.903406   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:06:53.903418   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:53.922053   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:53.922062   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:56.469528   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:01.472245   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:01.472734   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:01.512039   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:07:01.512183   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:01.533202   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:07:01.533300   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:01.548794   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:07:01.548872   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:01.560938   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:07:01.561014   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:01.574475   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:07:01.574553   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:01.585658   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:07:01.585729   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:01.596210   12818 logs.go:276] 0 containers: []
	W0318 04:07:01.596224   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:01.596290   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:01.607243   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:07:01.607259   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:01.607264   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:01.612076   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:07:01.612084   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:07:01.623471   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:01.623486   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:01.647634   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:07:01.647642   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:07:01.668005   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:07:01.668016   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:07:01.683355   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:07:01.683365   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:07:01.695964   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:07:01.695974   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:07:01.714056   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:07:01.714069   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:07:01.729608   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:07:01.729617   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:07:01.745816   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:01.745827   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:01.789988   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:07:01.789999   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:07:01.808820   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:07:01.808831   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:07:01.822036   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:07:01.822048   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:07:01.834528   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:07:01.834541   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:01.847468   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:01.847483   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:01.908409   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:07:01.908419   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:07:01.923009   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:07:01.923020   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:07:04.472079   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:09.474338   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:09.474780   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:09.513929   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:07:09.514070   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:09.535472   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:07:09.535598   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:09.550800   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:07:09.550870   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:09.563717   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:07:09.563794   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:09.579427   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:07:09.579500   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:09.590646   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:07:09.590718   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:09.601600   12818 logs.go:276] 0 containers: []
	W0318 04:07:09.601613   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:09.601672   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:09.612045   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:07:09.612064   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:09.612069   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:09.648971   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:07:09.648981   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:07:09.664686   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:07:09.664699   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:07:09.679989   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:07:09.680001   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:07:09.692197   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:09.692205   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:09.696723   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:07:09.696742   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:07:09.709200   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:07:09.709216   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:07:09.727788   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:07:09.727797   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:09.740578   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:07:09.740589   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:07:09.753072   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:07:09.753083   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:07:09.768891   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:09.768902   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:09.796124   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:09.796138   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:09.838109   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:07:09.838128   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:07:09.853302   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:07:09.853315   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:07:09.894799   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:07:09.894810   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:07:09.907826   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:07:09.907837   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:07:09.923466   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:07:09.923477   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:07:12.437350   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:17.439634   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:17.439856   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:17.460026   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:07:17.460132   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:17.475320   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:07:17.475398   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:17.487487   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:07:17.487557   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:17.498212   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:07:17.498290   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:17.508806   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:07:17.508878   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:17.520094   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:07:17.520164   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:17.529992   12818 logs.go:276] 0 containers: []
	W0318 04:07:17.530002   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:17.530056   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:17.540544   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:07:17.540560   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:07:17.540565   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:07:17.579821   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:07:17.579834   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:07:17.600487   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:07:17.600495   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:07:17.613873   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:07:17.613884   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:17.627376   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:17.627388   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:17.666340   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:07:17.666354   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:07:17.682209   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:07:17.682221   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:07:17.699102   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:07:17.699110   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:07:17.711620   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:17.711632   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:17.753813   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:07:17.753825   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:07:17.768657   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:07:17.768670   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:07:17.785209   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:07:17.785226   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:07:17.798347   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:17.798358   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:17.825869   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:17.825883   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:17.830961   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:07:17.830968   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:07:17.842967   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:07:17.842978   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:07:17.863606   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:07:17.863614   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:07:20.394018   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:25.395888   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:25.396188   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:25.424906   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:07:25.425034   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:25.442376   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:07:25.442486   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:25.455698   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:07:25.455769   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:25.467346   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:07:25.467421   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:25.478440   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:07:25.478509   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:25.490170   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:07:25.490233   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:25.510985   12818 logs.go:276] 0 containers: []
	W0318 04:07:25.510994   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:25.511025   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:25.522359   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:07:25.522379   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:07:25.522384   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:07:25.534863   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:25.534880   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:25.559289   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:25.559303   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:25.598156   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:07:25.598170   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:07:25.640557   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:07:25.640567   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:07:25.655657   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:07:25.655669   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:07:25.670930   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:07:25.670941   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:07:25.691564   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:07:25.691575   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:07:25.710933   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:25.710948   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:25.754365   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:25.754379   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:25.759386   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:07:25.759396   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:07:25.772440   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:07:25.772454   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:07:25.789584   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:07:25.789594   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:25.802150   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:07:25.802162   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:07:25.820119   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:07:25.820137   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:07:25.835506   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:07:25.835518   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:07:25.847440   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:07:25.847455   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:07:28.362185   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:33.364361   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:33.364515   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:33.378401   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:07:33.378482   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:33.390966   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:07:33.391041   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:33.406474   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:07:33.406547   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:33.418429   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:07:33.418503   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:33.429790   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:07:33.429863   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:33.441047   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:07:33.441124   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:33.452374   12818 logs.go:276] 0 containers: []
	W0318 04:07:33.452388   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:33.452415   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:33.465912   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:07:33.465929   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:07:33.465934   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:07:33.478565   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:33.478582   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:33.483047   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:07:33.483057   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:07:33.497843   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:07:33.497854   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:07:33.516519   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:07:33.516532   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:07:33.532439   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:07:33.532451   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:07:33.574704   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:07:33.574717   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:07:33.590021   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:07:33.590029   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:07:33.602315   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:33.602328   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:33.648836   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:33.648853   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:33.687100   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:07:33.687118   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:07:33.703384   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:07:33.703397   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:07:33.720937   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:07:33.720951   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:33.733874   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:07:33.733889   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:07:33.749087   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:07:33.749104   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:07:33.768820   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:07:33.768832   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:07:33.788502   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:33.788514   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:36.315152   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:41.317364   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:41.317502   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:41.336098   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:07:41.336192   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:41.350829   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:07:41.350905   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:41.363518   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:07:41.363588   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:41.375104   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:07:41.375175   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:41.386907   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:07:41.386978   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:41.398636   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:07:41.398705   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:41.411473   12818 logs.go:276] 0 containers: []
	W0318 04:07:41.411485   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:41.411545   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:41.422744   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:07:41.422762   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:07:41.422767   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:07:41.439778   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:41.439787   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:41.467048   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:41.467059   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:41.510758   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:41.510772   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:41.516082   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:07:41.516091   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:07:41.533714   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:07:41.533726   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:07:41.546850   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:07:41.546861   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:07:41.562884   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:07:41.562896   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:07:41.575237   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:41.575245   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:41.615390   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:07:41.615404   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:07:41.628494   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:07:41.628506   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:07:41.643526   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:07:41.643544   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:07:41.684100   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:07:41.684111   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:07:41.702194   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:07:41.702206   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:07:41.714375   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:07:41.714384   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:07:41.733349   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:07:41.733362   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:07:41.745385   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:07:41.745399   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
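One detail in the recurring "container status" step is the fallback baked into the command: the backtick expression resolves crictl's path when it is installed; when it is absent, `which crictl` fails, `echo crictl` substitutes the bare name, the crictl invocation then errors out, and the || clause falls back to plain docker. Rewritten with $() for readability (same logic as the logged command):

    # Prefer crictl for the container table; if crictl is missing, its
    # invocation fails and the docker fallback runs instead.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a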
	I0318 04:07:44.267873   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:49.269966   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:49.270086   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:49.288641   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:07:49.288719   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:49.302724   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:07:49.302782   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:49.314147   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:07:49.314185   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:49.325690   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:07:49.325729   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:49.336886   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:07:49.336964   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:49.348289   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:07:49.348360   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:49.360198   12818 logs.go:276] 0 containers: []
	W0318 04:07:49.360212   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:49.360278   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:49.372282   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:07:49.372303   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:49.372309   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:49.377139   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:07:49.377148   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:07:49.419904   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:49.419915   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:49.461799   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:07:49.461819   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:07:49.477214   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:07:49.477228   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:07:49.493198   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:07:49.493206   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:07:49.505862   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:07:49.505874   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:07:49.518842   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:07:49.518855   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:49.531727   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:07:49.531737   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:07:49.547533   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:07:49.547544   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:07:49.560399   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:07:49.560412   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:07:49.573134   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:49.573145   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:49.596943   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:49.596958   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:49.638619   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:07:49.638631   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:07:49.653508   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:07:49.653524   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:07:49.665194   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:07:49.665203   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:07:49.683821   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:07:49.683833   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
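
With the IDs in hand, the cycle fans out over 'docker logs --tail 400' per container, journalctl for the kubelet and Docker units, dmesg, and a container-status command that prefers crictl and falls back to 'docker ps -a'. A condensed sketch of that fan-out, with the command strings copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one diagnostic command through bash, the same way
    // ssh_runner wraps the "Gathering logs for ..." commands above.
    func gather(name, cmd string) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("=== %s (err=%v) ===\n%s", name, err, out)
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        // Per-container logs reuse the IDs from the discovery step; this is
        // the apiserver container listed above.
        gather("kube-apiserver", "docker logs --tail 400 bcc742beeb53")
        // Container status prefers crictl and falls back to docker:
        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }
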
	I0318 04:07:52.201726   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:57.202635   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:57.202682   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:57.214438   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:07:57.214477   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:57.225919   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:07:57.225996   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:57.238811   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:07:57.238885   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:57.249922   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:07:57.249996   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:57.265893   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:07:57.265964   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:57.279182   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:07:57.279257   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:57.291027   12818 logs.go:276] 0 containers: []
	W0318 04:07:57.291041   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:57.291110   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:57.303109   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:07:57.303128   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:07:57.303135   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:07:57.342320   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:07:57.342338   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:07:57.354754   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:07:57.354767   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:07:57.367909   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:07:57.367923   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:07:57.384032   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:07:57.384044   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:07:57.395933   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:57.395949   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:57.432775   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:07:57.432787   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:07:57.447471   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:07:57.447484   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:07:57.460811   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:07:57.460824   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:07:57.477116   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:57.477129   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:57.501925   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:07:57.501938   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:57.514819   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:57.514833   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:57.519348   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:07:57.519361   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:07:57.534686   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:07:57.534698   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:07:57.554255   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:57.554272   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:57.596056   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:07:57.596069   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:07:57.607987   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:07:57.607998   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:08:00.123661   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:05.125771   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:05.125855   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:08:05.137361   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:08:05.137442   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:08:05.152070   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:08:05.152144   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:08:05.164219   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:08:05.164293   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:08:05.181541   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:08:05.181617   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:08:05.192752   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:08:05.192821   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:08:05.204541   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:08:05.204614   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:08:05.215777   12818 logs.go:276] 0 containers: []
	W0318 04:08:05.215787   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:08:05.215850   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:08:05.231199   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:08:05.231216   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:08:05.231222   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:08:05.273964   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:08:05.273987   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:08:05.278872   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:08:05.278885   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:08:05.294557   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:08:05.294569   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:08:05.337100   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:08:05.337112   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:08:05.351540   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:08:05.351553   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:08:05.368005   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:08:05.368013   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:08:05.381978   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:08:05.381992   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:08:05.396927   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:08:05.396937   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:08:05.409869   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:08:05.409881   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:08:05.425953   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:08:05.425964   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:08:05.439580   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:08:05.439595   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:08:05.459339   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:08:05.459347   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:08:05.472109   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:08:05.472121   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:08:05.496419   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:08:05.496437   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:08:05.532511   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:08:05.532524   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:08:05.551059   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:08:05.551071   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:08:08.062871   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:13.065093   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:13.065147   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:08:13.077307   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:08:13.077379   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:08:13.089575   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:08:13.089644   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:08:13.101825   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:08:13.101897   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:08:13.116193   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:08:13.116271   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:08:13.127525   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:08:13.127598   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:08:13.139106   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:08:13.139181   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:08:13.153094   12818 logs.go:276] 0 containers: []
	W0318 04:08:13.153106   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:08:13.153168   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:08:13.164163   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:08:13.164180   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:08:13.164188   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:08:13.176808   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:08:13.176820   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:08:13.190031   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:08:13.190042   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:08:13.195143   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:08:13.195154   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:08:13.211457   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:08:13.211468   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:08:13.224425   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:08:13.224435   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:08:13.240248   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:08:13.240256   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:08:13.253462   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:08:13.253472   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:08:13.269367   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:08:13.269384   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:08:13.292705   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:08:13.292729   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:08:13.316581   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:08:13.316595   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:08:13.358107   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:08:13.358126   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:08:13.399225   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:08:13.399239   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:08:13.415685   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:08:13.415698   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:08:13.427705   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:08:13.427718   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:08:13.462613   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:08:13.462624   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:08:13.474126   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:08:13.474139   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:08:15.991676   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:20.993971   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:20.994009   12818 kubeadm.go:591] duration metric: took 4m5.056468333s to restartPrimaryControlPlane
	W0318 04:08:20.994043   12818 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 04:08:20.994057   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0318 04:08:22.039893   12818 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.045848917s)
	I0318 04:08:22.039972   12818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 04:08:22.044993   12818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 04:08:22.047915   12818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 04:08:22.050537   12818 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 04:08:22.050543   12818 kubeadm.go:156] found existing configuration files:
	
	I0318 04:08:22.050566   12818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/admin.conf
	I0318 04:08:22.053353   12818 kubeadm.go:162] "https://control-plane.minikube.internal:52301" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 04:08:22.053380   12818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 04:08:22.055880   12818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/kubelet.conf
	I0318 04:08:22.058567   12818 kubeadm.go:162] "https://control-plane.minikube.internal:52301" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 04:08:22.058592   12818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 04:08:22.062050   12818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/controller-manager.conf
	I0318 04:08:22.064637   12818 kubeadm.go:162] "https://control-plane.minikube.internal:52301" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 04:08:22.064656   12818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 04:08:22.067047   12818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/scheduler.conf
	I0318 04:08:22.069515   12818 kubeadm.go:162] "https://control-plane.minikube.internal:52301" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 04:08:22.069539   12818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
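
After the reset, minikube validates any leftover kubeconfigs: a grep for the expected control-plane endpoint that exits non-zero (file missing, or pointing at a different endpoint) leads to 'rm -f' so that kubeadm init can regenerate the file. A sketch of that grep-then-remove loop, with the endpoint and file list taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // cleanupStaleConf mirrors the sequence above: when grep cannot find the
    // expected endpoint in a kubeconfig (non-zero exit, including when the
    // file does not exist), the file is deleted before `kubeadm init`.
    func cleanupStaleConf(endpoint string, files []string) {
        for _, f := range files {
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }

    func main() {
        cleanupStaleConf("https://control-plane.minikube.internal:52301", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
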
	I0318 04:08:22.071962   12818 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 04:08:22.087684   12818 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0318 04:08:22.087713   12818 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 04:08:22.148760   12818 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 04:08:22.148812   12818 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 04:08:22.148876   12818 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0318 04:08:22.201192   12818 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 04:08:22.205202   12818 out.go:204]   - Generating certificates and keys ...
	I0318 04:08:22.205244   12818 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 04:08:22.205285   12818 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 04:08:22.205357   12818 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 04:08:22.205408   12818 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 04:08:22.205461   12818 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 04:08:22.205611   12818 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 04:08:22.205657   12818 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 04:08:22.205689   12818 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 04:08:22.205733   12818 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 04:08:22.205779   12818 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 04:08:22.205806   12818 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 04:08:22.205852   12818 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 04:08:22.300092   12818 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 04:08:22.391764   12818 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 04:08:22.424820   12818 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 04:08:22.544721   12818 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 04:08:22.572654   12818 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 04:08:22.572939   12818 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 04:08:22.572977   12818 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 04:08:22.658487   12818 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 04:08:22.665543   12818 out.go:204]   - Booting up control plane ...
	I0318 04:08:22.665596   12818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 04:08:22.665637   12818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 04:08:22.665677   12818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 04:08:22.665724   12818 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 04:08:22.665807   12818 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 04:08:27.664501   12818 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.002330 seconds
	I0318 04:08:27.664595   12818 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 04:08:27.669974   12818 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 04:08:28.177818   12818 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 04:08:28.178082   12818 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-727000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 04:08:28.684052   12818 kubeadm.go:309] [bootstrap-token] Using token: 7fi7vs.pubje3x3gw34rkjb
	I0318 04:08:28.687174   12818 out.go:204]   - Configuring RBAC rules ...
	I0318 04:08:28.687243   12818 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 04:08:28.687303   12818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 04:08:28.690063   12818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 04:08:28.695348   12818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0318 04:08:28.696320   12818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 04:08:28.697166   12818 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 04:08:28.700584   12818 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 04:08:28.877327   12818 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 04:08:29.089808   12818 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 04:08:29.090420   12818 kubeadm.go:309] 
	I0318 04:08:29.090456   12818 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 04:08:29.090459   12818 kubeadm.go:309] 
	I0318 04:08:29.090495   12818 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 04:08:29.090499   12818 kubeadm.go:309] 
	I0318 04:08:29.090511   12818 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 04:08:29.090539   12818 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 04:08:29.090680   12818 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 04:08:29.090699   12818 kubeadm.go:309] 
	I0318 04:08:29.090754   12818 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 04:08:29.090762   12818 kubeadm.go:309] 
	I0318 04:08:29.090790   12818 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 04:08:29.090793   12818 kubeadm.go:309] 
	I0318 04:08:29.090820   12818 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 04:08:29.090887   12818 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 04:08:29.090967   12818 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 04:08:29.090978   12818 kubeadm.go:309] 
	I0318 04:08:29.091040   12818 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 04:08:29.091085   12818 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 04:08:29.091088   12818 kubeadm.go:309] 
	I0318 04:08:29.091127   12818 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 7fi7vs.pubje3x3gw34rkjb \
	I0318 04:08:29.091182   12818 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8277259516d27b69cae5fa1851268f91d26ad55d9793249b2b30ee52eba5b372 \
	I0318 04:08:29.091195   12818 kubeadm.go:309] 	--control-plane 
	I0318 04:08:29.091198   12818 kubeadm.go:309] 
	I0318 04:08:29.091243   12818 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 04:08:29.091251   12818 kubeadm.go:309] 
	I0318 04:08:29.091292   12818 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 7fi7vs.pubje3x3gw34rkjb \
	I0318 04:08:29.091404   12818 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8277259516d27b69cae5fa1851268f91d26ad55d9793249b2b30ee52eba5b372 
	I0318 04:08:29.091453   12818 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
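
The --discovery-token-ca-cert-hash value in the join command above is a SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A sketch that recomputes it from a CA PEM, assuming the conventional kubeadm path /etc/kubernetes/pki/ca.crt (the log does not show where this cluster keeps its CA):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash reproduces the discovery-token-ca-cert-hash format:
    // sha256 over the certificate's DER-encoded Subject Public Key Info.
    func caCertHash(pemPath string) (string, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return "", fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        return fmt.Sprintf("sha256:%x", sum), nil
    }

    func main() {
        h, err := caCertHash("/etc/kubernetes/pki/ca.crt")
        fmt.Println(h, err)
    }
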
	I0318 04:08:29.091461   12818 cni.go:84] Creating CNI manager for ""
	I0318 04:08:29.091468   12818 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:08:29.094926   12818 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 04:08:29.102059   12818 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 04:08:29.106361   12818 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
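
cni.go then writes a 457-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist. The log does not show the file's contents, so the following is only a representative bridge CNI conflist of that general shape; the subnet and plugin options are assumptions, not minikube's template:

    package main

    import "os"

    // An illustrative bridge conflist. The values below are assumed for the
    // sketch; only the destination path comes from the log.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // Written to /tmp here so the sketch is safe to run; minikube's
        // target is /etc/cni/net.d/1-k8s.conflist.
        os.WriteFile("/tmp/1-k8s.conflist", []byte(conflist), 0o644)
    }
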
	I0318 04:08:29.112320   12818 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 04:08:29.112456   12818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-727000 minikube.k8s.io/updated_at=2024_03_18T04_08_29_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd minikube.k8s.io/name=running-upgrade-727000 minikube.k8s.io/primary=true
	I0318 04:08:29.112469   12818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 04:08:29.159654   12818 ops.go:34] apiserver oom_adj: -16
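
ops.go:34 records the apiserver's OOM score adjustment by reading /proc/<pid>/oom_adj for the pgrep'd kube-apiserver process; the -16 above means the kernel is strongly discouraged from OOM-killing it. A sketch of the same check (pgrep -n, newest match, is a choice made for this sketch; the command in the log passes no flags):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // apiserverOOMAdj mirrors `cat /proc/$(pgrep kube-apiserver)/oom_adj`:
    // find the kube-apiserver PID, then read its (legacy) oom_adj file.
    func apiserverOOMAdj() (string, error) {
        pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            return "", err
        }
        data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        v, err := apiserverOOMAdj()
        fmt.Println("apiserver oom_adj:", v, err)
    }
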
	I0318 04:08:29.159691   12818 kubeadm.go:1107] duration metric: took 47.345958ms to wait for elevateKubeSystemPrivileges
	W0318 04:08:29.160013   12818 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 04:08:29.160056   12818 kubeadm.go:393] duration metric: took 4m13.237048625s to StartCluster
	I0318 04:08:29.160113   12818 settings.go:142] acquiring lock: {Name:mk6b2370db0877d87697e89b229a47432418434b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:08:29.160283   12818 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:08:29.161708   12818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/kubeconfig: {Name:mk14a68d5fe20b060d9b3cac470478a530bf535b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:08:29.162002   12818 config.go:182] Loaded profile config "running-upgrade-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:08:29.162050   12818 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:08:29.165982   12818 out.go:177] * Verifying Kubernetes components...
	I0318 04:08:29.162236   12818 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 04:08:29.173932   12818 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-727000"
	I0318 04:08:29.173954   12818 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-727000"
	I0318 04:08:29.173962   12818 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-727000"
	W0318 04:08:29.173970   12818 addons.go:243] addon storage-provisioner should already be in state true
	I0318 04:08:29.173971   12818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:08:29.173976   12818 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-727000"
	I0318 04:08:29.174020   12818 host.go:66] Checking if "running-upgrade-727000" exists ...
	I0318 04:08:29.175411   12818 kapi.go:59] client config for running-upgrade-727000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/client.key", CAFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106332a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 04:08:29.175549   12818 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-727000"
	W0318 04:08:29.175555   12818 addons.go:243] addon default-storageclass should already be in state true
	I0318 04:08:29.175565   12818 host.go:66] Checking if "running-upgrade-727000" exists ...
	I0318 04:08:29.179986   12818 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:08:29.183124   12818 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 04:08:29.183136   12818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 04:08:29.183151   12818 sshutil.go:53] new ssh client: &{IP:localhost Port:52269 SSHKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/running-upgrade-727000/id_rsa Username:docker}
	I0318 04:08:29.184211   12818 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 04:08:29.184218   12818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 04:08:29.184223   12818 sshutil.go:53] new ssh client: &{IP:localhost Port:52269 SSHKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/running-upgrade-727000/id_rsa Username:docker}
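
The "scp memory --> path (N bytes)" transfers above push addon manifests from minikube's embedded assets straight over the SSH session rather than copying files from disk. A sketch of streaming an in-memory payload to a remote path, assuming a plain ssh CLI with the port and user from the sshutil lines above in place of minikube's internal client (a real run would also need the listed identity key):

    package main

    import (
        "bytes"
        "os/exec"
    )

    // pushMemoryFile streams data to remotePath on the guest over ssh, which
    // is conceptually what ssh_runner's "scp memory --> path" transfer does.
    // `sudo tee` lets the unprivileged ssh user write into /etc.
    func pushMemoryFile(port, host, remotePath string, data []byte) error {
        cmd := exec.Command("ssh", "-p", port, host, "sudo tee "+remotePath+" >/dev/null")
        cmd.Stdin = bytes.NewReader(data)
        return cmd.Run()
    }

    func main() {
        manifest := []byte("# manifest bytes from the embedded asset\n") // placeholder payload
        _ = pushMemoryFile("52269", "docker@localhost", "/etc/kubernetes/addons/storage-provisioner.yaml", manifest)
    }
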
	I0318 04:08:29.267317   12818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 04:08:29.273351   12818 api_server.go:52] waiting for apiserver process to appear ...
	I0318 04:08:29.273413   12818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:08:29.277773   12818 api_server.go:72] duration metric: took 115.708125ms to wait for apiserver process to appear ...
	I0318 04:08:29.277788   12818 api_server.go:88] waiting for apiserver healthz status ...
	I0318 04:08:29.277796   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:29.289074   12818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 04:08:29.289089   12818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 04:08:34.278800   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:34.278851   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:39.279646   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:39.279671   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:44.279794   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:44.279834   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:49.280097   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:49.280144   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:54.280498   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:54.280547   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:59.280991   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:59.281042   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0318 04:08:59.640067   12818 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0318 04:08:59.644321   12818 out.go:177] * Enabled addons: storage-provisioner
	I0318 04:08:59.652225   12818 addons.go:505] duration metric: took 30.4909315s for enable addons: enabled=[storage-provisioner]
	I0318 04:09:04.281790   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:04.281811   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:09.282585   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:09.282628   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:14.282844   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:14.282889   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:19.284105   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:19.284132   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:24.285594   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:24.285633   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:29.287525   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:29.287626   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:09:29.300693   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:09:29.300768   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:09:29.311170   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:09:29.311243   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:09:29.322097   12818 logs.go:276] 2 containers: [c4569429f40f 9e480ed55df6]
	I0318 04:09:29.322175   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:09:29.332154   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:09:29.332222   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:09:29.342859   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:09:29.342937   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:09:29.353163   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:09:29.353238   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:09:29.363429   12818 logs.go:276] 0 containers: []
	W0318 04:09:29.363441   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:09:29.363501   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:09:29.374171   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:09:29.374186   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:09:29.374192   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:09:29.378888   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:09:29.378894   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:09:29.394064   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:09:29.394076   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:09:29.405929   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:09:29.405944   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:09:29.420963   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:09:29.420974   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:09:29.443806   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:09:29.443813   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:09:29.455125   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:09:29.455138   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:09:29.467034   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:09:29.467048   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:09:29.503358   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:09:29.503367   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:09:29.538111   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:09:29.538125   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:09:29.555078   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:09:29.555088   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:09:29.566773   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:09:29.566787   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:09:29.578360   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:09:29.578372   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:09:32.100067   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:37.102278   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:37.102429   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:09:37.118540   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:09:37.118632   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:09:37.138002   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:09:37.138071   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:09:37.148577   12818 logs.go:276] 2 containers: [c4569429f40f 9e480ed55df6]
	I0318 04:09:37.148651   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:09:37.160339   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:09:37.160416   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:09:37.170523   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:09:37.170597   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:09:37.181619   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:09:37.181687   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:09:37.192507   12818 logs.go:276] 0 containers: []
	W0318 04:09:37.192521   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:09:37.192583   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:09:37.203684   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:09:37.203703   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:09:37.203710   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:09:37.215557   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:09:37.215567   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:09:37.240635   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:09:37.240645   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:09:37.275542   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:09:37.275555   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:09:37.290187   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:09:37.290198   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:09:37.307909   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:09:37.307920   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:09:37.321849   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:09:37.321861   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:09:37.340638   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:09:37.340650   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:09:37.352547   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:09:37.352558   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:09:37.367438   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:09:37.367450   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:09:37.378532   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:09:37.378546   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:09:37.415360   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:09:37.415368   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:09:37.419705   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:09:37.419715   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:09:39.936266   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:44.938555   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:44.938852   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:09:44.979301   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:09:44.979435   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:09:45.008250   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:09:45.008327   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:09:45.022273   12818 logs.go:276] 2 containers: [c4569429f40f 9e480ed55df6]
	I0318 04:09:45.022345   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:09:45.033760   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:09:45.033829   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:09:45.045503   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:09:45.045580   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:09:45.056466   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:09:45.056531   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:09:45.067021   12818 logs.go:276] 0 containers: []
	W0318 04:09:45.067036   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:09:45.067095   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:09:45.078722   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:09:45.078739   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:09:45.078744   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:09:45.114603   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:09:45.114614   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:09:45.128719   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:09:45.128730   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:09:45.140118   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:09:45.140129   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:09:45.151819   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:09:45.151830   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:09:45.169891   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:09:45.169902   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:09:45.182107   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:09:45.182119   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:09:45.221571   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:09:45.221582   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:09:45.226692   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:09:45.226699   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:09:45.242631   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:09:45.242642   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:09:45.253659   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:09:45.253670   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:09:45.268968   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:09:45.268978   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:09:45.292146   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:09:45.292155   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:09:47.806020   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:52.808242   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:52.808453   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:09:52.825177   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:09:52.825255   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:09:52.837705   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:09:52.837776   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:09:52.849230   12818 logs.go:276] 2 containers: [c4569429f40f 9e480ed55df6]
	I0318 04:09:52.849305   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:09:52.859813   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:09:52.859882   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:09:52.870576   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:09:52.870639   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:09:52.895048   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:09:52.895127   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:09:52.907251   12818 logs.go:276] 0 containers: []
	W0318 04:09:52.907266   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:09:52.907331   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:09:52.918300   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:09:52.918316   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:09:52.918320   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:09:52.953875   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:09:52.953886   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:09:52.958787   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:09:52.958794   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:09:52.997850   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:09:52.997862   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:09:53.016479   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:09:53.016490   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:09:53.029281   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:09:53.029293   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:09:53.049580   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:09:53.049591   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:09:53.061240   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:09:53.061252   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:09:53.086026   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:09:53.086040   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:09:53.099829   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:09:53.099840   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:09:53.114119   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:09:53.114130   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:09:53.125920   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:09:53.125931   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:09:53.140591   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:09:53.140601   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:09:55.654662   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:00.655404   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:00.655679   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:00.682674   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:10:00.682789   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:00.698183   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:10:00.698265   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:00.711530   12818 logs.go:276] 2 containers: [c4569429f40f 9e480ed55df6]
	I0318 04:10:00.711617   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:00.722548   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:10:00.722625   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:00.733400   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:10:00.733477   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:00.743932   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:10:00.743993   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:00.754433   12818 logs.go:276] 0 containers: []
	W0318 04:10:00.754451   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:00.754510   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:00.765327   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:10:00.765342   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:10:00.765347   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:10:00.777432   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:10:00.777444   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:10:00.794525   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:10:00.794535   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:10:00.811900   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:00.811911   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:00.835975   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:00.835986   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:00.872857   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:00.872867   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:00.877382   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:10:00.877389   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:10:00.891731   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:10:00.891741   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:10:00.903534   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:10:00.903545   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:10:00.915243   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:10:00.915254   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:10:00.926689   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:10:00.926701   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:00.938380   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:00.938391   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:00.975711   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:10:00.975723   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:10:03.496353   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:08.492493   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:08.492576   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:08.503954   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:10:08.504024   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:08.516083   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:10:08.516158   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:08.526790   12818 logs.go:276] 2 containers: [c4569429f40f 9e480ed55df6]
	I0318 04:10:08.526861   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:08.542478   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:10:08.542550   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:08.554626   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:10:08.554705   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:08.566890   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:10:08.566961   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:08.578064   12818 logs.go:276] 0 containers: []
	W0318 04:10:08.578076   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:08.578136   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:08.589448   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:10:08.589464   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:10:08.589470   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:10:08.601755   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:08.601766   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:08.640289   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:08.640308   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:08.681650   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:10:08.681658   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:10:08.696758   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:10:08.696768   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:10:08.711369   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:10:08.711381   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:10:08.727688   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:08.727699   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:08.751135   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:10:08.751151   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:08.762652   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:08.762663   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:08.767449   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:10:08.767458   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:10:08.786328   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:10:08.786339   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:10:08.801406   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:10:08.801417   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:10:08.813372   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:10:08.813383   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:10:11.328663   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:16.323169   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:16.323311   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:16.335020   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:10:16.335092   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:16.346979   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:10:16.347047   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:16.359850   12818 logs.go:276] 2 containers: [c4569429f40f 9e480ed55df6]
	I0318 04:10:16.359940   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:16.371211   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:10:16.371280   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:16.382630   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:10:16.382706   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:16.394419   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:10:16.394490   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:16.405281   12818 logs.go:276] 0 containers: []
	W0318 04:10:16.405292   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:16.405348   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:16.416658   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:10:16.416674   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:10:16.416681   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:10:16.433109   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:10:16.433125   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:10:16.447091   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:16.447105   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:16.472971   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:16.472980   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:16.513348   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:16.513370   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:16.518979   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:10:16.518992   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:10:16.535975   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:10:16.535986   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:10:16.548288   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:10:16.548299   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:10:16.560331   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:10:16.560342   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:16.573076   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:16.573088   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:16.611920   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:10:16.611929   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:10:16.626758   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:10:16.626771   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:10:16.644979   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:10:16.644991   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:10:19.156115   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:24.153086   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:24.153167   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:24.164593   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:10:24.164664   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:24.175832   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:10:24.175899   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:24.187896   12818 logs.go:276] 2 containers: [c4569429f40f 9e480ed55df6]
	I0318 04:10:24.187972   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:24.199906   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:10:24.199984   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:24.210869   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:10:24.210944   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:24.227375   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:10:24.227449   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:24.240836   12818 logs.go:276] 0 containers: []
	W0318 04:10:24.240847   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:24.240910   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:24.257689   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:10:24.257706   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:10:24.257712   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:10:24.270495   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:10:24.270503   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:10:24.284569   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:10:24.284584   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:10:24.296741   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:24.296753   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:24.301955   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:24.301967   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:24.339140   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:10:24.339150   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:10:24.354040   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:10:24.354052   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:10:24.368705   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:10:24.368717   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:10:24.383461   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:10:24.383472   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:24.395148   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:24.395158   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:24.431194   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:10:24.431201   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:10:24.445682   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:10:24.445692   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:10:24.464106   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:24.464119   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:26.989566   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:31.987022   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:31.987108   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:31.998709   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:10:31.998783   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:32.011462   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:10:32.011532   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:32.022866   12818 logs.go:276] 2 containers: [c4569429f40f 9e480ed55df6]
	I0318 04:10:32.022946   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:32.035089   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:10:32.035165   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:32.047058   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:10:32.047133   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:32.058619   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:10:32.058685   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:32.070008   12818 logs.go:276] 0 containers: []
	W0318 04:10:32.070020   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:32.070087   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:32.081798   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:10:32.081814   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:10:32.081820   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:10:32.094509   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:10:32.094518   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:32.107302   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:32.107315   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:32.146457   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:32.146472   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:32.151561   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:10:32.151572   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:10:32.167178   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:10:32.167197   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:10:32.180404   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:10:32.180417   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:10:32.193335   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:10:32.193348   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:10:32.216431   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:32.216441   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:32.241904   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:32.241912   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:32.277046   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:10:32.277061   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:10:32.292083   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:10:32.292095   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:10:32.303986   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:10:32.303996   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:10:34.823347   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:39.823710   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:39.823787   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:39.835434   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:10:39.835507   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:39.846491   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:10:39.846558   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:39.857662   12818 logs.go:276] 2 containers: [c4569429f40f 9e480ed55df6]
	I0318 04:10:39.857728   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:39.869426   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:10:39.869502   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:39.880601   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:10:39.880681   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:39.892789   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:10:39.892863   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:39.904615   12818 logs.go:276] 0 containers: []
	W0318 04:10:39.904630   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:39.904694   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:39.915660   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:10:39.915676   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:39.915682   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:39.954622   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:39.954633   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:39.991618   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:10:39.991625   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:10:40.007643   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:10:40.007660   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:10:40.020086   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:10:40.020098   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:10:40.033175   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:10:40.033185   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:10:40.051306   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:10:40.051316   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:40.062715   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:40.062725   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:40.067119   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:10:40.067128   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:10:40.081240   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:10:40.081251   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:10:40.093238   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:10:40.093250   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:10:40.107586   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:10:40.107597   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:10:40.119524   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:40.119533   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:42.644836   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:47.645890   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:47.645973   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:47.658157   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:10:47.658226   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:47.669965   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:10:47.670035   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:47.681768   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:10:47.681840   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:47.693221   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:10:47.693286   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:47.705550   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:10:47.705625   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:47.717282   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:10:47.717347   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:47.728203   12818 logs.go:276] 0 containers: []
	W0318 04:10:47.728220   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:47.728282   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:47.740259   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:10:47.740277   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:47.740282   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:47.781385   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:47.781398   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:47.786791   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:10:47.786803   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:10:47.799028   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:10:47.799039   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:10:47.811659   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:10:47.811670   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:47.823940   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:47.823952   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:47.864178   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:10:47.864192   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:10:47.879055   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:10:47.879070   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:10:47.891054   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:10:47.891068   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:10:47.902809   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:10:47.902819   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:10:47.914630   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:47.914640   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:47.939396   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:10:47.939403   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:10:47.953610   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:10:47.953623   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:10:47.968887   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:10:47.968902   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:10:47.990659   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:10:47.990673   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:10:50.504349   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:55.503906   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:55.503988   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:55.515496   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:10:55.515568   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:55.526843   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:10:55.526910   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:55.538038   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:10:55.538118   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:55.550150   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:10:55.550230   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:55.561640   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:10:55.561711   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:55.573292   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:10:55.573369   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:55.585483   12818 logs.go:276] 0 containers: []
	W0318 04:10:55.585516   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:55.585590   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:55.597613   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:10:55.597629   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:55.597635   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:55.602954   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:10:55.602965   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:10:55.615703   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:10:55.615713   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:10:55.642509   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:10:55.642527   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:10:55.656413   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:10:55.656425   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:10:55.668984   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:55.668995   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:55.696834   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:10:55.696853   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:55.714030   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:55.714042   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:55.751630   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:10:55.751653   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:10:55.763247   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:10:55.763259   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:10:55.775095   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:10:55.775106   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:10:55.789175   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:10:55.789187   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:10:55.802879   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:10:55.802891   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:10:55.820501   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:55.820513   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:55.855071   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:10:55.855085   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:10:58.368919   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:03.370592   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:03.370727   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:03.382292   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:11:03.382370   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:03.393731   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:11:03.393803   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:03.405086   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:11:03.405161   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:03.416759   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:11:03.416830   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:03.431008   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:11:03.431084   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:03.443120   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:11:03.443196   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:03.454356   12818 logs.go:276] 0 containers: []
	W0318 04:11:03.454366   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:03.454424   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:03.470256   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:11:03.470272   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:11:03.470277   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:11:03.485201   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:11:03.485213   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:11:03.497900   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:11:03.497918   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:03.510052   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:11:03.510064   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:11:03.523696   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:03.523707   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:03.549439   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:03.549456   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:03.589332   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:11:03.589343   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:11:03.603805   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:11:03.603818   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:11:03.615592   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:11:03.615604   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:11:03.631039   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:03.631050   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:03.635810   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:03.635817   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:03.670983   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:11:03.670992   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:11:03.683247   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:11:03.683258   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:11:03.701866   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:11:03.701877   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:11:03.719714   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:11:03.719724   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:11:06.233355   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:11.235171   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:11.235267   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:11.246699   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:11:11.246775   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:11.257772   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:11:11.257840   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:11.269675   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:11:11.269751   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:11.284928   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:11:11.284998   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:11.296609   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:11:11.296678   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:11.307149   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:11:11.307218   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:11.318254   12818 logs.go:276] 0 containers: []
	W0318 04:11:11.318266   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:11.318323   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:11.329345   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:11:11.329360   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:11:11.329364   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:11:11.345025   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:11.345037   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:11.371592   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:11.371606   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:11.410183   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:11:11.410194   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:11:11.428429   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:11:11.428441   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:11:11.442827   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:11:11.442838   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:11:11.454647   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:11.454657   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:11.492433   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:11:11.492442   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:11:11.509755   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:11:11.509765   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:11:11.525730   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:11:11.525740   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:11:11.543182   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:11.543193   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:11.547803   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:11:11.547812   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:11:11.561918   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:11:11.561930   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:11:11.573766   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:11:11.573775   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:11:11.585543   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:11:11.585556   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:14.099348   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:19.100337   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:19.100412   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:19.111808   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:11:19.111893   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:19.123613   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:11:19.123685   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:19.142013   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:11:19.142089   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:19.153610   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:11:19.153689   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:19.165328   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:11:19.165409   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:19.177427   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:11:19.177497   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:19.189960   12818 logs.go:276] 0 containers: []
	W0318 04:11:19.189981   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:19.190044   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:19.202780   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:11:19.202798   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:11:19.202804   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:11:19.219815   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:11:19.219828   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:11:19.233808   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:11:19.233820   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:11:19.253135   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:19.253148   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:19.293560   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:11:19.293573   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:11:19.306209   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:11:19.306223   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:11:19.318138   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:11:19.318149   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:11:19.330106   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:11:19.330116   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:11:19.341750   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:11:19.341760   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:19.354129   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:19.354143   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:19.359032   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:11:19.359042   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:11:19.373610   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:11:19.373620   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:11:19.385052   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:19.385061   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:19.409749   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:19.409756   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:19.448173   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:11:19.448222   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:11:21.966148   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:26.968272   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:26.968350   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:26.983462   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:11:26.983521   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:26.995078   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:11:26.995139   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:27.006594   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:11:27.006651   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:27.017802   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:11:27.017859   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:27.031673   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:11:27.031732   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:27.042731   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:11:27.042790   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:27.054273   12818 logs.go:276] 0 containers: []
	W0318 04:11:27.054284   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:27.054340   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:27.072026   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:11:27.072042   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:11:27.072047   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:11:27.087151   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:11:27.087167   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:11:27.106573   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:11:27.106584   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:11:27.118820   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:27.118829   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:27.145864   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:27.145875   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:27.185495   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:11:27.185507   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:11:27.200009   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:11:27.200021   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:11:27.211949   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:11:27.211962   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:11:27.223914   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:27.223925   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:27.262090   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:11:27.262104   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:11:27.280245   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:11:27.280257   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:11:27.297451   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:11:27.297462   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:27.309560   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:27.309571   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:27.314538   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:11:27.314548   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:11:27.326379   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:11:27.326391   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:11:29.840392   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:34.842422   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:34.842504   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:34.859004   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:11:34.859079   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:34.870837   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:11:34.870911   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:34.884122   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:11:34.884199   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:34.899131   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:11:34.899203   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:34.909749   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:11:34.909808   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:34.922828   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:11:34.922898   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:34.934152   12818 logs.go:276] 0 containers: []
	W0318 04:11:34.934165   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:34.934227   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:34.947816   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:11:34.947835   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:11:34.947840   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:11:34.961156   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:11:34.961167   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:11:34.977997   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:11:34.978010   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:34.991100   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:34.991116   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:35.030364   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:35.030377   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:35.035506   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:35.035520   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:35.071126   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:11:35.071137   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:11:35.086355   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:11:35.086368   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:11:35.098359   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:11:35.098369   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:11:35.113497   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:11:35.113508   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:11:35.125259   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:35.125273   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:35.149222   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:11:35.149232   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:11:35.162945   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:11:35.162955   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:11:35.174901   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:11:35.174910   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:11:35.186176   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:11:35.186187   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:11:37.706379   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:42.707682   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:42.707775   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:42.719391   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:11:42.719459   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:42.731312   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:11:42.731385   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:42.743018   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:11:42.743096   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:42.754898   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:11:42.754972   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:42.766628   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:11:42.766700   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:42.778886   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:11:42.778955   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:42.790272   12818 logs.go:276] 0 containers: []
	W0318 04:11:42.790286   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:42.790351   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:42.801667   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:11:42.801686   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:11:42.801693   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:11:42.813957   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:11:42.813966   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:11:42.837449   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:42.837460   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:42.842349   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:11:42.842355   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:11:42.855321   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:11:42.855332   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:11:42.868582   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:11:42.868593   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:11:42.883287   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:11:42.883299   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:42.900332   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:11:42.900343   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:11:42.923163   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:11:42.923175   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:11:42.941292   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:11:42.941303   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:11:42.952638   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:42.952650   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:42.977304   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:42.977311   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:43.014532   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:43.014540   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:43.051580   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:11:43.051589   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:11:43.066167   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:11:43.066177   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:11:45.590258   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:50.592313   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:50.592420   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:50.603759   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:11:50.603834   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:50.615359   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:11:50.615437   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:50.626500   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:11:50.626578   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:50.641786   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:11:50.641856   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:50.653404   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:11:50.653473   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:50.665244   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:11:50.665311   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:50.675629   12818 logs.go:276] 0 containers: []
	W0318 04:11:50.675644   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:50.675714   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:50.687651   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:11:50.687669   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:11:50.687674   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:11:50.703290   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:11:50.703306   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:11:50.718798   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:11:50.718810   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:11:50.737587   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:50.737600   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:50.777245   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:11:50.777258   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:11:50.790107   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:11:50.790120   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:11:50.807055   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:11:50.807068   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:11:50.819757   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:11:50.819770   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:11:50.833878   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:11:50.833890   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:11:50.846907   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:50.846919   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:50.852154   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:11:50.852162   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:11:50.866946   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:11:50.866957   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:50.878903   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:50.878914   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:50.915024   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:11:50.915036   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:11:50.929686   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:50.929695   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:53.454897   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:58.454971   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:58.455064   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:58.466343   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:11:58.466416   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:58.477869   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:11:58.477943   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:58.489299   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:11:58.489399   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:58.500487   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:11:58.500553   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:58.511885   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:11:58.511962   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:58.523385   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:11:58.523466   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:58.534601   12818 logs.go:276] 0 containers: []
	W0318 04:11:58.534614   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:58.534677   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:58.552978   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:11:58.552991   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:58.552995   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:58.593071   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:11:58.593093   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:11:58.609303   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:11:58.609315   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:11:58.622099   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:11:58.622111   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:11:58.634370   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:11:58.634380   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:11:58.653557   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:58.653564   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:58.658567   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:11:58.658578   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:11:58.676728   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:11:58.676738   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:11:58.688239   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:11:58.688253   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:11:58.701321   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:58.701333   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:58.737657   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:11:58.737667   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:11:58.751702   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:58.751712   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:58.775667   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:11:58.775676   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:58.787116   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:11:58.787127   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:11:58.798897   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:11:58.798908   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:12:01.312072   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:12:06.314131   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:12:06.314200   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:12:06.325841   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:12:06.325917   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:12:06.337687   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:12:06.337768   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:12:06.349170   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:12:06.349247   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:12:06.360358   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:12:06.360430   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:12:06.376241   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:12:06.376312   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:12:06.387668   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:12:06.387742   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:12:06.399052   12818 logs.go:276] 0 containers: []
	W0318 04:12:06.399061   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:12:06.399123   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:12:06.410487   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:12:06.410507   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:12:06.410513   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:12:06.423362   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:12:06.423374   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:12:06.436795   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:12:06.436806   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:12:06.448858   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:12:06.448871   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:12:06.464503   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:12:06.464511   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:12:06.479549   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:12:06.479559   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:12:06.491792   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:12:06.491806   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:12:06.504717   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:12:06.504729   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:12:06.526094   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:12:06.526106   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:12:06.553504   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:12:06.553518   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:12:06.594384   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:12:06.594394   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:12:06.632230   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:12:06.632244   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:12:06.645146   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:12:06.645157   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:12:06.656810   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:12:06.656821   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:12:06.661471   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:12:06.661478   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:12:09.178939   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:12:14.180901   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:12:14.181023   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:12:14.197325   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:12:14.197417   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:12:14.210827   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:12:14.210899   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:12:14.222595   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:12:14.222668   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:12:14.234191   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:12:14.234262   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:12:14.246357   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:12:14.246422   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:12:14.258420   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:12:14.258488   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:12:14.269527   12818 logs.go:276] 0 containers: []
	W0318 04:12:14.269538   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:12:14.269596   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:12:14.293308   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:12:14.293323   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:12:14.293328   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:12:14.318265   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:12:14.318277   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:12:14.331077   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:12:14.331089   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:12:14.356536   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:12:14.356549   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:12:14.369445   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:12:14.369458   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:12:14.381631   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:12:14.381642   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:12:14.394531   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:12:14.394545   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:12:14.413388   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:12:14.413404   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:12:14.425875   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:12:14.425884   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:12:14.430878   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:12:14.430889   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:12:14.469163   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:12:14.469176   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:12:14.485550   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:12:14.485567   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:12:14.497889   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:12:14.497901   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:12:14.536988   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:12:14.537001   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:12:14.555495   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:12:14.555505   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:12:17.068844   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:12:22.070959   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:12:22.071057   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:12:22.082686   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:12:22.082757   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:12:22.097198   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:12:22.097272   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:12:22.108190   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:12:22.108233   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:12:22.120414   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:12:22.120490   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:12:22.132335   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:12:22.132407   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:12:22.143872   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:12:22.143940   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:12:22.158082   12818 logs.go:276] 0 containers: []
	W0318 04:12:22.158093   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:12:22.158153   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:12:22.169613   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:12:22.169630   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:12:22.169636   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:12:22.182160   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:12:22.182171   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:12:22.194998   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:12:22.195008   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:12:22.207902   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:12:22.207915   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:12:22.247726   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:12:22.247740   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:12:22.262890   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:12:22.262902   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:12:22.280884   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:12:22.280892   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:12:22.318913   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:12:22.318926   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:12:22.331815   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:12:22.331824   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:12:22.356409   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:12:22.356426   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:12:22.369300   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:12:22.369318   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:12:22.392322   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:12:22.392334   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:12:22.405967   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:12:22.405982   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:12:22.422695   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:12:22.422711   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:12:22.427890   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:12:22.427902   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:12:24.944912   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:12:29.947066   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:12:29.951664   12818 out.go:177] 
	W0318 04:12:29.955509   12818 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0318 04:12:29.955524   12818 out.go:239] * 
	W0318 04:12:29.956741   12818 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:12:29.963494   12818 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-727000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
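The stderr above captures the failure mode directly: minikube probes https://10.0.2.15:8443/healthz, each probe giving up after about 5 seconds (the gap between each "Checking apiserver healthz" line and its "stopped:" line), gathers component logs, and retries until the overall 6m0s node wait expires with a GUEST_START error. Below is a minimal Go sketch of that wait-for-healthy loop; the helper name, intervals, and TLS handling are illustrative assumptions, not minikube's actual implementation.

// wait_for_healthz.go: a minimal sketch of the apiserver health wait seen in
// the stderr above ("Checking apiserver healthz ..." / "stopped: ...").
// Names, intervals, and TLS handling are illustrative assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-probe timeout, matching the ~5s gaps in the log
		Transport: &http.Transport{
			// Assumption: the probe skips verification of the apiserver's
			// self-signed certificate; only the HTTP status matters here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported healthy
			}
		}
		// In the real run, component logs are gathered between probes.
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to GUEST_START: wait for healthy API server:", err)
		os.Exit(80) // the run above exited with status 80 on this class of error
	}
}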
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-03-18 04:12:30.052386 -0700 PDT m=+1380.499413792
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-727000 -n running-upgrade-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-727000 -n running-upgrade-727000: exit status 2 (15.671326708s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
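Each retry above, and the post-mortem dump below, gathers logs the same way: enumerate the containers for each component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tail each ID with docker logs --tail 400. A short Go sketch of the enumeration step follows; the helper name and component list are illustrative, not minikube's code.

// container_ids.go: a sketch of the per-component lookup logged above as
// "Run: docker ps -a --filter=name=k8s_<component> --format={{.ID}}".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited) whose
// name carries the k8s_<component> prefix that kubelet gives pod containers.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("lookup for %q failed: %v\n", c, err)
			continue
		}
		// Mirrors the "logs.go:276] N containers: [...]" lines above; each ID
		// is then tailed with: docker logs --tail 400 <id>
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}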
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-727000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-466000 sudo                                | cilium-466000             | jenkins | v1.32.0 | 18 Mar 24 04:02 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-466000 sudo                                | cilium-466000             | jenkins | v1.32.0 | 18 Mar 24 04:02 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-466000 sudo cat                            | cilium-466000             | jenkins | v1.32.0 | 18 Mar 24 04:02 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-466000 sudo cat                            | cilium-466000             | jenkins | v1.32.0 | 18 Mar 24 04:02 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-466000 sudo                                | cilium-466000             | jenkins | v1.32.0 | 18 Mar 24 04:02 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-466000 sudo                                | cilium-466000             | jenkins | v1.32.0 | 18 Mar 24 04:02 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-466000 sudo                                | cilium-466000             | jenkins | v1.32.0 | 18 Mar 24 04:02 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-466000 sudo cat                            | cilium-466000             | jenkins | v1.32.0 | 18 Mar 24 04:02 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-466000 sudo cat                            | cilium-466000             | jenkins | v1.32.0 | 18 Mar 24 04:02 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-466000 sudo                                | cilium-466000             | jenkins | v1.32.0 | 18 Mar 24 04:02 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-466000 sudo                                | cilium-466000             | jenkins | v1.32.0 | 18 Mar 24 04:02 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-466000 sudo                                | cilium-466000             | jenkins | v1.32.0 | 18 Mar 24 04:02 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-466000 sudo find                           | cilium-466000             | jenkins | v1.32.0 | 18 Mar 24 04:02 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-466000 sudo crio                           | cilium-466000             | jenkins | v1.32.0 | 18 Mar 24 04:02 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-466000                                     | cilium-466000             | jenkins | v1.32.0 | 18 Mar 24 04:02 PDT | 18 Mar 24 04:02 PDT |
	| start   | -p kubernetes-upgrade-294000                         | kubernetes-upgrade-294000 | jenkins | v1.32.0 | 18 Mar 24 04:02 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-654000                             | offline-docker-654000     | jenkins | v1.32.0 | 18 Mar 24 04:02 PDT | 18 Mar 24 04:02 PDT |
	| stop    | -p kubernetes-upgrade-294000                         | kubernetes-upgrade-294000 | jenkins | v1.32.0 | 18 Mar 24 04:02 PDT | 18 Mar 24 04:02 PDT |
	| start   | -p stopped-upgrade-484000                            | minikube                  | jenkins | v1.26.0 | 18 Mar 24 04:02 PDT | 18 Mar 24 04:04 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-294000                         | kubernetes-upgrade-294000 | jenkins | v1.32.0 | 18 Mar 24 04:02 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                    |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-294000                         | kubernetes-upgrade-294000 | jenkins | v1.32.0 | 18 Mar 24 04:02 PDT | 18 Mar 24 04:02 PDT |
	| start   | -p running-upgrade-727000                            | minikube                  | jenkins | v1.26.0 | 18 Mar 24 04:02 PDT | 18 Mar 24 04:03 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| start   | -p running-upgrade-727000                            | running-upgrade-727000    | jenkins | v1.32.0 | 18 Mar 24 04:03 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-484000 stop                          | minikube                  | jenkins | v1.26.0 | 18 Mar 24 04:04 PDT | 18 Mar 24 04:04 PDT |
	| start   | -p stopped-upgrade-484000                            | stopped-upgrade-484000    | jenkins | v1.32.0 | 18 Mar 24 04:04 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 04:04:26
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 04:04:26.545191   12861 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:04:26.545333   12861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:04:26.545337   12861 out.go:304] Setting ErrFile to fd 2...
	I0318 04:04:26.545341   12861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:04:26.545496   12861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:04:26.546799   12861 out.go:298] Setting JSON to false
	I0318 04:04:26.565187   12861 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7439,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:04:26.565260   12861 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:04:26.570135   12861 out.go:177] * [stopped-upgrade-484000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:04:26.578149   12861 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:04:26.578201   12861 notify.go:220] Checking for updates...
	I0318 04:04:26.586127   12861 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:04:26.589139   12861 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:04:26.592034   12861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:04:26.595082   12861 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:04:26.598125   12861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:04:26.601432   12861 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:04:26.605066   12861 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 04:04:26.608088   12861 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:04:26.611953   12861 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:04:26.619104   12861 start.go:297] selected driver: qemu2
	I0318 04:04:26.619110   12861 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52368 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 04:04:26.619171   12861 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:04:26.621944   12861 cni.go:84] Creating CNI manager for ""
	I0318 04:04:26.621967   12861 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:04:26.621995   12861 start.go:340] cluster config:
	{Name:stopped-upgrade-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52368 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 04:04:26.622043   12861 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:04:26.629086   12861 out.go:177] * Starting "stopped-upgrade-484000" primary control-plane node in "stopped-upgrade-484000" cluster
	I0318 04:04:26.633147   12861 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 04:04:26.633162   12861 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0318 04:04:26.633176   12861 cache.go:56] Caching tarball of preloaded images
	I0318 04:04:26.633228   12861 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:04:26.633235   12861 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0318 04:04:26.633290   12861 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/config.json ...
	I0318 04:04:26.633770   12861 start.go:360] acquireMachinesLock for stopped-upgrade-484000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:04:26.633798   12861 start.go:364] duration metric: took 21.75µs to acquireMachinesLock for "stopped-upgrade-484000"
	I0318 04:04:26.633806   12861 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:04:26.633811   12861 fix.go:54] fixHost starting: 
	I0318 04:04:26.633915   12861 fix.go:112] recreateIfNeeded on stopped-upgrade-484000: state=Stopped err=<nil>
	W0318 04:04:26.633926   12861 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:04:26.638115   12861 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-484000" ...
	I0318 04:04:23.283923   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:04:23.283944   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:04:26.646123   12861 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/stopped-upgrade-484000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/stopped-upgrade-484000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/stopped-upgrade-484000/qemu.pid -nic user,model=virtio,hostfwd=tcp::52306-:22,hostfwd=tcp::52307-:2376,hostname=stopped-upgrade-484000 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/stopped-upgrade-484000/disk.qcow2
	I0318 04:04:26.695491   12861 main.go:141] libmachine: STDOUT: 
	I0318 04:04:26.695520   12861 main.go:141] libmachine: STDERR: 
	I0318 04:04:26.695526   12861 main.go:141] libmachine: Waiting for VM to start (ssh -p 52306 docker@127.0.0.1)...
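
The restart above reduces to a single qemu-system-aarch64 invocation: HVF acceleration, 2200 MB and 2 vCPUs, user-mode networking with host port forwards for SSH (52306) and the Docker API (52307), and -daemonize so the call returns as soon as the VM is up. A minimal Go sketch of assembling such a command (the startVM helper and machineDir layout are illustrative, not minikube's actual API):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // startVM launches a boot2docker guest the way the log above shows:
    // user-mode NIC with hostfwd rules, ISO as boot media, qcow2 as disk.
    func startVM(machineDir string, sshPort, dockerPort int) error {
    	nic := fmt.Sprintf("user,model=virtio,hostfwd=tcp::%d-:22,hostfwd=tcp::%d-:2376",
    		sshPort, dockerPort)
    	args := []string{
    		"-M", "virt,highmem=off",
    		"-cpu", "host",
    		"-accel", "hvf", // Apple Hypervisor.framework on arm64 macOS
    		"-m", "2200", "-smp", "2",
    		"-display", "none",
    		"-nic", nic,
    		"-boot", "d",
    		"-cdrom", machineDir + "/boot2docker.iso",
    		"-pidfile", machineDir + "/qemu.pid",
    		"-daemonize", machineDir + "/disk.qcow2",
    	}
    	// -daemonize forks QEMU into the background, so Run returns promptly
    	// and the caller then polls SSH until the guest is reachable.
    	return exec.Command("qemu-system-aarch64", args...).Run()
    }

    func main() {
    	err := startVM("/tmp/machines/stopped-upgrade-484000", 52306, 52307)
    	fmt.Println("qemu start:", err)
    }
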
	I0318 04:04:28.284225   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:04:28.284286   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:04:33.284735   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:04:33.284821   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:04:38.285482   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:04:38.285514   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:04:43.286397   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:04:43.286496   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
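
The interleaved pid-12818 lines belong to another concurrent start polling its apiserver: each probe is a GET on /healthz with a client-side timeout, "context deadline exceeded" means the request timed out, and the loop simply retries. A rough Go sketch of such a probe loop (the endpoint is taken from the log; the retry count and the skip-verify TLS handling are simplifying assumptions):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver's cert is not in the host trust store here,
    		// hence InsecureSkipVerify for the health probe only.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for attempt := 0; attempt < 3; attempt++ {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			fmt.Println("stopped:", err) // matches the log's timeout lines
    			continue
    		}
    		resp.Body.Close()
    		fmt.Println("healthz:", resp.Status)
    		return
    	}
    }
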
	I0318 04:04:46.879435   12861 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/config.json ...
	I0318 04:04:46.879778   12861 machine.go:94] provisionDockerMachine start ...
	I0318 04:04:46.879842   12861 main.go:141] libmachine: Using SSH client type: native
	I0318 04:04:46.880056   12861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10127dbf0] 0x101280450 <nil>  [] 0s} localhost 52306 <nil> <nil>}
	I0318 04:04:46.880063   12861 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 04:04:46.952214   12861 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 04:04:46.952234   12861 buildroot.go:166] provisioning hostname "stopped-upgrade-484000"
	I0318 04:04:46.952311   12861 main.go:141] libmachine: Using SSH client type: native
	I0318 04:04:46.952479   12861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10127dbf0] 0x101280450 <nil>  [] 0s} localhost 52306 <nil> <nil>}
	I0318 04:04:46.952489   12861 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-484000 && echo "stopped-upgrade-484000" | sudo tee /etc/hostname
	I0318 04:04:47.026970   12861 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-484000
	
	I0318 04:04:47.027040   12861 main.go:141] libmachine: Using SSH client type: native
	I0318 04:04:47.027170   12861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10127dbf0] 0x101280450 <nil>  [] 0s} localhost 52306 <nil> <nil>}
	I0318 04:04:47.027181   12861 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-484000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-484000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-484000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 04:04:47.095084   12861 main.go:141] libmachine: SSH cmd err, output: <nil>: 
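
The shell guard just executed keeps /etc/hosts consistent with the renamed guest: rewrite the 127.0.1.1 line if one exists, otherwise append one, and do nothing when the hostname is already present. The same idempotent logic as a Go sketch (ensureHostsEntry is a hypothetical helper operating on a local file):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func ensureHostsEntry(path, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	// Already mapped on some line? Then nothing to do.
    	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
    		return nil
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	entry := "127.0.1.1 " + hostname
    	if loopback.Match(data) {
    		data = loopback.ReplaceAll(data, []byte(entry)) // rewrite existing line
    	} else {
    		data = append(data, []byte(entry+"\n")...) // append a new one
    	}
    	return os.WriteFile(path, data, 0o644)
    }

    func main() {
    	fmt.Println(ensureHostsEntry("/etc/hosts", "stopped-upgrade-484000"))
    }
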
	I0318 04:04:47.095099   12861 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18431-10628/.minikube CaCertPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18431-10628/.minikube}
	I0318 04:04:47.095109   12861 buildroot.go:174] setting up certificates
	I0318 04:04:47.095118   12861 provision.go:84] configureAuth start
	I0318 04:04:47.095124   12861 provision.go:143] copyHostCerts
	I0318 04:04:47.095220   12861 exec_runner.go:144] found /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.pem, removing ...
	I0318 04:04:47.095229   12861 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.pem
	I0318 04:04:47.095352   12861 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.pem (1078 bytes)
	I0318 04:04:47.095558   12861 exec_runner.go:144] found /Users/jenkins/minikube-integration/18431-10628/.minikube/cert.pem, removing ...
	I0318 04:04:47.095563   12861 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18431-10628/.minikube/cert.pem
	I0318 04:04:47.095628   12861 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18431-10628/.minikube/cert.pem (1123 bytes)
	I0318 04:04:47.095745   12861 exec_runner.go:144] found /Users/jenkins/minikube-integration/18431-10628/.minikube/key.pem, removing ...
	I0318 04:04:47.095749   12861 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18431-10628/.minikube/key.pem
	I0318 04:04:47.095809   12861 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18431-10628/.minikube/key.pem (1675 bytes)
	I0318 04:04:47.095955   12861 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-484000 san=[127.0.0.1 localhost minikube stopped-upgrade-484000]
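
provision.go:117 above mints a server certificate whose SANs cover 127.0.0.1, localhost, minikube, and the profile name, valid for the configured 26280h. A compact Go sketch of issuing such a cert with crypto/x509 (self-signed here for brevity, whereas the provisioner signs with the ca.pem/ca-key.pem pair):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-484000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		// SANs match the san=[...] list in the log line above.
    		DNSNames:    []string{"localhost", "minikube", "stopped-upgrade-484000"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// template == parent makes it self-signed; the real flow passes the CA cert/key.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	fmt.Println("DER bytes:", len(der), "err:", err)
    }
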
	I0318 04:04:47.152039   12861 provision.go:177] copyRemoteCerts
	I0318 04:04:47.152072   12861 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 04:04:47.152079   12861 sshutil.go:53] new ssh client: &{IP:localhost Port:52306 SSHKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	I0318 04:04:47.188814   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 04:04:47.195439   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 04:04:47.202323   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 04:04:47.209916   12861 provision.go:87] duration metric: took 114.795292ms to configureAuth
	I0318 04:04:47.209925   12861 buildroot.go:189] setting minikube options for container-runtime
	I0318 04:04:47.210035   12861 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:04:47.210068   12861 main.go:141] libmachine: Using SSH client type: native
	I0318 04:04:47.210155   12861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10127dbf0] 0x101280450 <nil>  [] 0s} localhost 52306 <nil> <nil>}
	I0318 04:04:47.210159   12861 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 04:04:47.275220   12861 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 04:04:47.275231   12861 buildroot.go:70] root file system type: tmpfs
	I0318 04:04:47.275282   12861 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 04:04:47.275337   12861 main.go:141] libmachine: Using SSH client type: native
	I0318 04:04:47.275443   12861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10127dbf0] 0x101280450 <nil>  [] 0s} localhost 52306 <nil> <nil>}
	I0318 04:04:47.275478   12861 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 04:04:47.343434   12861 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 04:04:47.343490   12861 main.go:141] libmachine: Using SSH client type: native
	I0318 04:04:47.343595   12861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10127dbf0] 0x101280450 <nil>  [] 0s} localhost 52306 <nil> <nil>}
	I0318 04:04:47.343604   12861 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 04:04:47.709066   12861 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 04:04:47.709080   12861 machine.go:97] duration metric: took 829.313833ms to provisionDockerMachine
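
The docker.service update above uses a write-then-diff idiom: the rendered unit goes to docker.service.new, and only when it differs from the installed file is it moved into place and the daemon re-enabled and restarted (the "can't stat" diff output is the expected first-run case). A simplified Go equivalent that compares in memory instead of shelling out to diff (installUnit is hypothetical):

    package main

    import (
    	"bytes"
    	"os"
    	"os/exec"
    )

    func installUnit(path string, unit []byte) error {
    	old, _ := os.ReadFile(path) // a missing file reads as empty
    	if bytes.Equal(old, unit) {
    		return nil // unchanged: skip daemon-reload and restart
    	}
    	if err := os.WriteFile(path+".new", unit, 0o644); err != nil {
    		return err
    	}
    	if err := os.Rename(path+".new", path); err != nil {
    		return err
    	}
    	// Same systemctl sequence as the log's one-liner.
    	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
    		if err := exec.Command("systemctl", args...).Run(); err != nil {
    			return err
    		}
    	}
    	return nil
    }

    func main() {
    	_ = installUnit("/tmp/docker.service", []byte("[Unit]\nDescription=Docker Application Container Engine\n"))
    }
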
	I0318 04:04:47.709086   12861 start.go:293] postStartSetup for "stopped-upgrade-484000" (driver="qemu2")
	I0318 04:04:47.709094   12861 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 04:04:47.709155   12861 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 04:04:47.709164   12861 sshutil.go:53] new ssh client: &{IP:localhost Port:52306 SSHKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	I0318 04:04:47.745314   12861 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 04:04:47.746620   12861 info.go:137] Remote host: Buildroot 2021.02.12
	I0318 04:04:47.746628   12861 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18431-10628/.minikube/addons for local assets ...
	I0318 04:04:47.746707   12861 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18431-10628/.minikube/files for local assets ...
	I0318 04:04:47.746818   12861 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18431-10628/.minikube/files/etc/ssl/certs/110352.pem -> 110352.pem in /etc/ssl/certs
	I0318 04:04:47.746935   12861 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 04:04:47.749878   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/files/etc/ssl/certs/110352.pem --> /etc/ssl/certs/110352.pem (1708 bytes)
	I0318 04:04:47.756865   12861 start.go:296] duration metric: took 47.774667ms for postStartSetup
	I0318 04:04:47.756881   12861 fix.go:56] duration metric: took 21.123598375s for fixHost
	I0318 04:04:47.756915   12861 main.go:141] libmachine: Using SSH client type: native
	I0318 04:04:47.757012   12861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10127dbf0] 0x101280450 <nil>  [] 0s} localhost 52306 <nil> <nil>}
	I0318 04:04:47.757016   12861 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 04:04:47.825634   12861 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710759888.083216504
	
	I0318 04:04:47.825645   12861 fix.go:216] guest clock: 1710759888.083216504
	I0318 04:04:47.825650   12861 fix.go:229] Guest: 2024-03-18 04:04:48.083216504 -0700 PDT Remote: 2024-03-18 04:04:47.756882 -0700 PDT m=+21.245155209 (delta=326.334504ms)
	I0318 04:04:47.825661   12861 fix.go:200] guest clock delta is within tolerance: 326.334504ms
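
The guest-clock check parses the guest's `date +%s.%N` output and compares it against the host clock; here the 326ms delta is within tolerance, so no resync is needed. A small Go sketch of that comparison (the 2s tolerance is an assumed value for illustration, not necessarily the one minikube uses):

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    func clockDelta(guestStamp string) (time.Duration, error) {
    	// float64 parsing is approximate at nanosecond precision,
    	// which is fine for a skew check.
    	secs, err := strconv.ParseFloat(guestStamp, 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, nil
    }

    func main() {
    	// Stamp taken from the log; the printed delta depends on when this runs.
    	delta, _ := clockDelta("1710759888.083216504")
    	fmt.Println("delta:", delta, "within 2s tolerance:", delta <= 2*time.Second)
    }
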
	I0318 04:04:47.825664   12861 start.go:83] releasing machines lock for "stopped-upgrade-484000", held for 21.192392625s
	I0318 04:04:47.825738   12861 ssh_runner.go:195] Run: cat /version.json
	I0318 04:04:47.825747   12861 sshutil.go:53] new ssh client: &{IP:localhost Port:52306 SSHKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	I0318 04:04:47.825764   12861 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 04:04:47.825787   12861 sshutil.go:53] new ssh client: &{IP:localhost Port:52306 SSHKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	W0318 04:04:47.826405   12861 sshutil.go:64] dial failure (will retry): dial tcp [::1]:52306: connect: connection refused
	I0318 04:04:47.826429   12861 retry.go:31] will retry after 297.432327ms: dial tcp [::1]:52306: connect: connection refused
	W0318 04:04:47.862298   12861 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0318 04:04:47.862360   12861 ssh_runner.go:195] Run: systemctl --version
	I0318 04:04:47.864225   12861 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 04:04:47.865673   12861 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 04:04:47.865706   12861 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0318 04:04:47.868933   12861 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0318 04:04:47.873217   12861 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 04:04:47.873226   12861 start.go:494] detecting cgroup driver to use...
	I0318 04:04:47.873299   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 04:04:47.880736   12861 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0318 04:04:47.883785   12861 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 04:04:47.886651   12861 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 04:04:47.886677   12861 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 04:04:47.890009   12861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 04:04:47.893568   12861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 04:04:47.896967   12861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 04:04:47.900463   12861 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 04:04:47.903198   12861 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 04:04:47.906455   12861 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 04:04:47.909557   12861 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 04:04:47.912196   12861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:04:47.974123   12861 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 04:04:47.980442   12861 start.go:494] detecting cgroup driver to use...
	I0318 04:04:47.980513   12861 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 04:04:47.989276   12861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 04:04:47.994234   12861 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 04:04:48.000032   12861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 04:04:48.004264   12861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 04:04:48.008656   12861 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 04:04:48.060444   12861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 04:04:48.065257   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 04:04:48.070613   12861 ssh_runner.go:195] Run: which cri-dockerd
	I0318 04:04:48.071864   12861 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 04:04:48.074628   12861 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 04:04:48.079635   12861 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 04:04:48.160375   12861 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 04:04:48.248879   12861 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 04:04:48.248959   12861 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 04:04:48.254999   12861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:04:48.333077   12861 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 04:04:49.487322   12861 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1542545s)
	I0318 04:04:49.487404   12861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 04:04:49.492004   12861 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0318 04:04:49.497783   12861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 04:04:49.502153   12861 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 04:04:49.566209   12861 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 04:04:49.622006   12861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:04:49.706302   12861 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 04:04:49.711830   12861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 04:04:49.716190   12861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:04:49.793297   12861 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 04:04:49.833986   12861 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 04:04:49.834059   12861 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 04:04:49.837137   12861 start.go:562] Will wait 60s for crictl version
	I0318 04:04:49.837196   12861 ssh_runner.go:195] Run: which crictl
	I0318 04:04:49.838561   12861 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 04:04:49.854124   12861 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0318 04:04:49.854193   12861 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 04:04:49.871470   12861 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 04:04:49.893285   12861 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0318 04:04:49.893357   12861 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0318 04:04:49.894584   12861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 04:04:49.898446   12861 kubeadm.go:877] updating cluster {Name:stopped-upgrade-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52368 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0318 04:04:49.898495   12861 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 04:04:49.898542   12861 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 04:04:49.909145   12861 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 04:04:49.909152   12861 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 04:04:49.909193   12861 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 04:04:49.912141   12861 ssh_runner.go:195] Run: which lz4
	I0318 04:04:49.913288   12861 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 04:04:49.914408   12861 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 04:04:49.914419   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0318 04:04:50.629072   12861 docker.go:649] duration metric: took 715.835458ms to copy over tarball
	I0318 04:04:50.629134   12861 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 04:04:48.288351   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:04:48.288369   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:04:51.820973   12861 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.191854875s)
	I0318 04:04:51.820987   12861 ssh_runner.go:146] rm: /preloaded.tar.lz4
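
The preload path above is: stat /preloaded.tar.lz4 in the guest (absent on first run), scp the cached ~360MB tarball over, unpack it into /var with lz4, then delete it. A Go sketch of the unpack step, shelling out with the same tar flags (unpackPreload is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func unpackPreload(tarball string) error {
    	// Equivalent of:
    	//   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf <tarball>
    	if err := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball).Run(); err != nil {
    		return err
    	}
    	// The tarball is removed once extracted, as in the log's rm step.
    	return exec.Command("sudo", "rm", "-f", tarball).Run()
    }

    func main() {
    	fmt.Println(unpackPreload("/preloaded.tar.lz4"))
    }
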
	I0318 04:04:51.836851   12861 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 04:04:51.839982   12861 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0318 04:04:51.845054   12861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:04:51.906879   12861 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 04:04:53.514337   12861 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.607480334s)
	I0318 04:04:53.514434   12861 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 04:04:53.526771   12861 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 04:04:53.526784   12861 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 04:04:53.526789   12861 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 04:04:53.533785   12861 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:04:53.533859   12861 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0318 04:04:53.534040   12861 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:04:53.534064   12861 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:04:53.534100   12861 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:04:53.534105   12861 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:04:53.534657   12861 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:04:53.534728   12861 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:04:53.542161   12861 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:04:53.543990   12861 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:04:53.544004   12861 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0318 04:04:53.544037   12861 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:04:53.544048   12861 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:04:53.544063   12861 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:04:53.544093   12861 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:04:53.544102   12861 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:04:55.452179   12861 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:04:55.489550   12861 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0318 04:04:55.489601   12861 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:04:55.489696   12861 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:04:55.509813   12861 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0318 04:04:55.509851   12861 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0318 04:04:55.524569   12861 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0318 04:04:55.524593   12861 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:04:55.524656   12861 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0318 04:04:55.536688   12861 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0318 04:04:55.552465   12861 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0318 04:04:55.564275   12861 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0318 04:04:55.564299   12861 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0318 04:04:55.564350   12861 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0318 04:04:55.576322   12861 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:04:55.577579   12861 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0318 04:04:55.577664   12861 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	W0318 04:04:55.584955   12861 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0318 04:04:55.585078   12861 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:04:55.587453   12861 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0318 04:04:55.587487   12861 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:04:55.587512   12861 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0318 04:04:55.587533   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0318 04:04:55.587618   12861 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:04:55.592711   12861 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:04:55.601655   12861 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0318 04:04:55.601682   12861 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:04:55.601732   12861 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:04:55.603656   12861 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0318 04:04:55.603666   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0318 04:04:55.613512   12861 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0318 04:04:55.621476   12861 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0318 04:04:55.621501   12861 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:04:55.621552   12861 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0318 04:04:55.621562   12861 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:04:55.621654   12861 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0318 04:04:55.646474   12861 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:04:55.653014   12861 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0318 04:04:55.653094   12861 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0318 04:04:55.653161   12861 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0318 04:04:55.653178   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0318 04:04:55.665844   12861 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0318 04:04:55.665867   12861 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:04:55.665923   12861 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:04:55.693886   12861 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0318 04:04:55.695272   12861 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0318 04:04:55.695282   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0318 04:04:55.736806   12861 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0318 04:04:56.062269   12861 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0318 04:04:56.062913   12861 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:04:56.098797   12861 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0318 04:04:56.098841   12861 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:04:56.098947   12861 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:04:56.127690   12861 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 04:04:56.127830   12861 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0318 04:04:56.129750   12861 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0318 04:04:56.129764   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0318 04:04:56.155550   12861 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 04:04:56.155566   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0318 04:04:56.391542   12861 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 04:04:56.391591   12861 cache_images.go:92] duration metric: took 2.86486475s to LoadCachedImages
	W0318 04:04:56.391639   12861 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
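
Each "needs transfer" line above comes from the same test: `docker image inspect --format {{.Id}}` against the expected hash; on a mismatch the stale tag is removed and the cached image tarball is piped into `docker load`. A reduced Go sketch of that decision (ensureImage is hypothetical; error handling trimmed):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func ensureImage(ref, wantID, tarball string) error {
    	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
    	if err == nil && strings.TrimSpace(string(out)) == wantID {
    		return nil // correct image already present in the runtime
    	}
    	exec.Command("docker", "rmi", ref).Run() // ignore failure if the tag is absent
    	fmt.Printf("%q needs transfer: loading %s\n", ref, tarball)
    	// Same shape as the log's: sudo cat <tarball> | docker load
    	return exec.Command("/bin/bash", "-c", "sudo cat "+tarball+" | docker load").Run()
    }

    func main() {
    	// Hash taken from the pause:3.7 "needs transfer" line above.
    	_ = ensureImage("registry.k8s.io/pause:3.7",
    		"e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
    		"/var/lib/minikube/images/pause_3.7")
    }
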
	I0318 04:04:56.391645   12861 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0318 04:04:56.391696   12861 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-484000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
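
The kubelet drop-in shown above is rendered from the node config: binary directory per Kubernetes version, hostname override, node IP, and the cri-dockerd socket. A text/template sketch producing a similar unit (the field names are illustrative, not minikube's):

    package main

    import (
    	"os"
    	"text/template"
    )

    const dropIn = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(dropIn))
    	// Values taken from the rendered unit in the log above.
    	_ = t.Execute(os.Stdout, map[string]string{
    		"BinDir":   "/var/lib/minikube/binaries/v1.24.1",
    		"NodeName": "stopped-upgrade-484000",
    		"NodeIP":   "10.0.2.15",
    	})
    }
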
	I0318 04:04:56.391773   12861 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 04:04:56.405156   12861 cni.go:84] Creating CNI manager for ""
	I0318 04:04:56.405169   12861 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:04:56.405174   12861 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 04:04:56.405183   12861 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-484000 NodeName:stopped-upgrade-484000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 04:04:56.405253   12861 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-484000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 04:04:56.405314   12861 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0318 04:04:56.408550   12861 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 04:04:56.408583   12861 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 04:04:56.411077   12861 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0318 04:04:56.415920   12861 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 04:04:56.420567   12861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0318 04:04:56.425991   12861 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0318 04:04:56.427311   12861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
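The one-liner above makes the hosts entry idempotent: any existing control-plane.minikube.internal line is filtered out before the fresh 10.0.2.15 mapping is appended, so repeated starts never duplicate it. A sketch of the same rewrite in Go, assuming direct file access rather than the ssh_runner (ensureHostsEntry is an illustrative name):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry rewrites hostsPath so it contains exactly one
    // "<ip>\t<host>" line: stale lines for host are dropped first,
    // then the fresh mapping is appended.
    func ensureHostsEntry(hostsPath, ip, host string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
    	var kept []string
    	for _, line := range lines {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // drop any stale mapping for this host
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "10.0.2.15", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }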
	I0318 04:04:56.430688   12861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:04:56.516369   12861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 04:04:56.521999   12861 certs.go:68] Setting up /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000 for IP: 10.0.2.15
	I0318 04:04:56.522007   12861 certs.go:194] generating shared ca certs ...
	I0318 04:04:56.522016   12861 certs.go:226] acquiring lock for ca certs: {Name:mk88e7d2813085015a17ae8f4e4d52088bf47170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:04:56.522173   12861 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.key
	I0318 04:04:56.522231   12861 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/proxy-client-ca.key
	I0318 04:04:56.522238   12861 certs.go:256] generating profile certs ...
	I0318 04:04:56.522316   12861 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/client.key
	I0318 04:04:56.522334   12861 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.key.52cfa968
	I0318 04:04:56.522348   12861 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.crt.52cfa968 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
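crypto.go:68 regenerates the apiserver serving certificate with the service IP, loopback, and node IP as SANs. A self-contained Go sketch of issuing a cert with those IP SANs via crypto/x509; it self-signs for brevity, whereas minikube signs with its minikubeCA (the CN and the lifetime here are illustrative):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Key and template for an apiserver-style serving cert; the SAN
    	// IPs match the set logged by crypto.go:68 above.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
    		},
    	}
    	// Self-signed here for brevity; minikube signs with its CA instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }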
	I0318 04:04:53.289894   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:04:53.289934   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:04:56.647490   12861 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.crt.52cfa968 ...
	I0318 04:04:56.647502   12861 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.crt.52cfa968: {Name:mk9602298f28832d9adc36f9a82559b6f11a161e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:04:56.647760   12861 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.key.52cfa968 ...
	I0318 04:04:56.647766   12861 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.key.52cfa968: {Name:mk41fb4ce1b695471d92aa2b46e2d02fc820d357 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:04:56.647897   12861 certs.go:381] copying /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.crt.52cfa968 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.crt
	I0318 04:04:56.648024   12861 certs.go:385] copying /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.key.52cfa968 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.key
	I0318 04:04:56.648158   12861 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/proxy-client.key
	I0318 04:04:56.648272   12861 certs.go:484] found cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/11035.pem (1338 bytes)
	W0318 04:04:56.648302   12861 certs.go:480] ignoring /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/11035_empty.pem, impossibly tiny 0 bytes
	I0318 04:04:56.648308   12861 certs.go:484] found cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 04:04:56.648335   12861 certs.go:484] found cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem (1078 bytes)
	I0318 04:04:56.648352   12861 certs.go:484] found cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem (1123 bytes)
	I0318 04:04:56.648368   12861 certs.go:484] found cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/key.pem (1675 bytes)
	I0318 04:04:56.648403   12861 certs.go:484] found cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/files/etc/ssl/certs/110352.pem (1708 bytes)
	I0318 04:04:56.648749   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 04:04:56.655461   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 04:04:56.662254   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 04:04:56.669786   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0318 04:04:56.676588   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 04:04:56.683094   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 04:04:56.690083   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 04:04:56.697363   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 04:04:56.704432   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 04:04:56.711755   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/11035.pem --> /usr/share/ca-certificates/11035.pem (1338 bytes)
	I0318 04:04:56.718409   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/files/etc/ssl/certs/110352.pem --> /usr/share/ca-certificates/110352.pem (1708 bytes)
	I0318 04:04:56.725534   12861 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 04:04:56.730689   12861 ssh_runner.go:195] Run: openssl version
	I0318 04:04:56.732922   12861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 04:04:56.735742   12861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 04:04:56.737033   12861 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 11:03 /usr/share/ca-certificates/minikubeCA.pem
	I0318 04:04:56.737057   12861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 04:04:56.738659   12861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 04:04:56.741948   12861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11035.pem && ln -fs /usr/share/ca-certificates/11035.pem /etc/ssl/certs/11035.pem"
	I0318 04:04:56.745003   12861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11035.pem
	I0318 04:04:56.746440   12861 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 10:51 /usr/share/ca-certificates/11035.pem
	I0318 04:04:56.746462   12861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11035.pem
	I0318 04:04:56.748189   12861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11035.pem /etc/ssl/certs/51391683.0"
	I0318 04:04:56.750920   12861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110352.pem && ln -fs /usr/share/ca-certificates/110352.pem /etc/ssl/certs/110352.pem"
	I0318 04:04:56.754243   12861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110352.pem
	I0318 04:04:56.755626   12861 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 10:51 /usr/share/ca-certificates/110352.pem
	I0318 04:04:56.755646   12861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110352.pem
	I0318 04:04:56.757264   12861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110352.pem /etc/ssl/certs/3ec20f2e.0"
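The hash/symlink pairs above install each PEM into OpenSSL's hashed trust directory: `openssl x509 -hash` prints the subject hash, and a `<hash>.0` symlink under /etc/ssl/certs is what OpenSSL's CA lookup actually resolves. A Go sketch of that pattern, shelling out to the same openssl binary (hashLink is an illustrative helper):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // hashLink installs certPath into OpenSSL's hashed cert directory:
    // OpenSSL locates CAs as <subject-hash>.0, so a symlink with that
    // name is all that is needed for the cert to be trusted.
    func hashLink(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
    	os.Remove(link) // replace any stale link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }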
	I0318 04:04:56.760340   12861 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 04:04:56.761568   12861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 04:04:56.764075   12861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 04:04:56.765992   12861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 04:04:56.768297   12861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 04:04:56.770188   12861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 04:04:56.771958   12861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
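Each control-plane certificate above is verified to remain valid for at least the next 24 hours (`openssl x509 -checkend 86400`). The equivalent check in pure Go, parsing the PEM and comparing NotAfter (checkEnd is an illustrative helper):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // checkEnd reports whether the first certificate in pemPath is still
    // valid `within` from now, mirroring `openssl x509 -checkend`.
    func checkEnd(pemPath string, within time.Duration) (bool, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(within).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("valid for 24h:", ok)
    }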
	I0318 04:04:56.773670   12861 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52368 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 04:04:56.773743   12861 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 04:04:56.783768   12861 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 04:04:56.786743   12861 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 04:04:56.786749   12861 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 04:04:56.786752   12861 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 04:04:56.786777   12861 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 04:04:56.790216   12861 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 04:04:56.790524   12861 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-484000" does not appear in /Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:04:56.790621   12861 kubeconfig.go:62] /Users/jenkins/minikube-integration/18431-10628/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-484000" cluster setting kubeconfig missing "stopped-upgrade-484000" context setting]
	I0318 04:04:56.790823   12861 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/kubeconfig: {Name:mk14a68d5fe20b060d9b3cac470478a530bf535b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:04:56.791259   12861 kapi.go:59] client config for stopped-upgrade-484000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/client.key", CAFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10256ea80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
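The kapi.go dump above is a client-go rest.Config pointed at the profile's client certificate pair and the shared minikube CA. A minimal sketch constructing the same kind of config, assuming the k8s.io/client-go dependency and reusing the paths from the log:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// Paths taken from the log line above.
    	base := "/Users/jenkins/minikube-integration/18431-10628/.minikube"
    	cfg := &rest.Config{
    		Host: "https://10.0.2.15:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: base + "/profiles/stopped-upgrade-484000/client.crt",
    			KeyFile:  base + "/profiles/stopped-upgrade-484000/client.key",
    			CAFile:   base + "/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("client ready:", clientset != nil)
    }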
	I0318 04:04:56.791568   12861 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 04:04:56.794351   12861 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-484000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
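Drift detection is just `diff -u` on the current and freshly generated kubeadm.yaml: exit status 0 means identical, 1 means drift (reconfigure, as happens here), and anything else is a real failure. A Go sketch of that three-way interpretation (configDrift is an illustrative helper):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // configDrift runs `diff -u old new`. diff exits 0 when the files
    // match, 1 when they differ, and >1 on error, so a non-nil err with
    // ExitCode 1 means "drift detected" rather than failure.
    func configDrift(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, "", nil
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		return true, string(out), nil
    	}
    	return false, "", err
    }

    func main() {
    	drift, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	if drift {
    		fmt.Print(diff)
    	}
    }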
	I0318 04:04:56.794358   12861 kubeadm.go:1154] stopping kube-system containers ...
	I0318 04:04:56.794400   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 04:04:56.805497   12861 docker.go:483] Stopping containers: [85734f46676a cce46acfb903 4f03456025e7 f398fedc3f15 2e0bd8ab6807 3d030fdab3e2 e7d513883fbd 165f5ca45430]
	I0318 04:04:56.805567   12861 ssh_runner.go:195] Run: docker stop 85734f46676a cce46acfb903 4f03456025e7 f398fedc3f15 2e0bd8ab6807 3d030fdab3e2 e7d513883fbd 165f5ca45430
	I0318 04:04:56.816514   12861 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 04:04:56.822390   12861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 04:04:56.825693   12861 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 04:04:56.825702   12861 kubeadm.go:156] found existing configuration files:
	
	I0318 04:04:56.825729   12861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/admin.conf
	I0318 04:04:56.828682   12861 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 04:04:56.828708   12861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 04:04:56.831142   12861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/kubelet.conf
	I0318 04:04:56.834025   12861 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 04:04:56.834054   12861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 04:04:56.837210   12861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/controller-manager.conf
	I0318 04:04:56.840004   12861 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 04:04:56.840042   12861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 04:04:56.842685   12861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/scheduler.conf
	I0318 04:04:56.845607   12861 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 04:04:56.845631   12861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
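Each kubeconfig under /etc/kubernetes is kept only if it already references the expected https://control-plane.minikube.internal:52368 endpoint; a failed grep (including a missing file, as in the four cases above) leads to `rm -f` so kubeadm can regenerate it. A Go sketch of that keep-or-prune rule (pruneStale is an illustrative helper):

    package main

    import (
    	"os"
    	"os/exec"
    )

    // pruneStale removes path unless it already references endpoint,
    // mirroring the grep-then-rm sequence in the log. A missing file is
    // treated like a stale one: grep fails, removal is a no-op.
    func pruneStale(path, endpoint string) error {
    	if exec.Command("grep", endpoint, path).Run() == nil {
    		return nil // endpoint already present, keep the file
    	}
    	return os.RemoveAll(path)
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:52368"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		if err := pruneStale("/etc/kubernetes/"+f, endpoint); err != nil {
    			panic(err)
    		}
    	}
    }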
	I0318 04:04:56.848698   12861 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 04:04:56.851623   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:04:56.872784   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:04:57.338791   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:04:57.450581   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:04:57.472689   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
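Rather than a full `kubeadm init`, the restart path replays individual init phases against the generated config, in dependency order: certs, kubeconfigs, kubelet-start, control-plane manifests, then local etcd. A sketch driving the same sequence (config path as in the log, error handling simplified):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// The same five phases as in the log; order matters
    	// (certs before kubeconfigs, kubelet before control plane).
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("kubeadm", args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
    			os.Exit(1)
    		}
    	}
    }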
	I0318 04:04:57.493618   12861 api_server.go:52] waiting for apiserver process to appear ...
	I0318 04:04:57.493706   12861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:04:57.995928   12861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:04:58.495752   12861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:04:58.501891   12861 api_server.go:72] duration metric: took 1.008300958s to wait for apiserver process to appear ...
	I0318 04:04:58.501906   12861 api_server.go:88] waiting for apiserver healthz status ...
	I0318 04:04:58.501923   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
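From here the harness polls https://10.0.2.15:8443/healthz with a short per-probe timeout; every "context deadline exceeded" line below is a single probe timing out, not the overall wait expiring. A Go sketch of such a poll loop (the timeouts are illustrative, and TLS verification is skipped for brevity where minikube would trust its own CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it answers
    // 200 or the overall deadline passes.
    func waitHealthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // per-probe timeout
    		Transport: &http.Transport{
    			// Illustrative shortcut; a real client pins the cluster CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	if err := waitHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }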
	I0318 04:04:58.291875   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:04:58.291900   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:03.503900   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:03.503928   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:03.293946   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:03.293972   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:08.504089   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:08.504151   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:08.296045   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:08.296088   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:13.505028   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:13.505074   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:13.297877   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:13.297896   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:18.505540   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:18.505558   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:18.299954   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:18.300166   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:05:18.316215   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:05:18.316293   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:05:18.328576   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:05:18.328650   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:05:18.338897   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:05:18.338976   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:05:18.349432   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:05:18.349522   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:05:18.359655   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:05:18.359715   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:05:18.370425   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:05:18.370514   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:05:18.380507   12818 logs.go:276] 0 containers: []
	W0318 04:05:18.380520   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:05:18.380579   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:05:18.391124   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:05:18.391142   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:05:18.391147   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:05:18.406465   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:05:18.406477   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:05:18.433149   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:05:18.433160   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:05:18.446786   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:05:18.446797   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:05:18.458221   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:05:18.458234   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:05:18.469618   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:05:18.469628   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:05:18.510326   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:05:18.510332   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:05:18.521560   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:05:18.521569   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:05:18.538473   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:05:18.538483   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:05:18.554889   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:05:18.554898   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:05:18.559588   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:05:18.559596   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:05:18.656794   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:05:18.656803   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:05:18.672267   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:05:18.672280   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:05:18.686088   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:05:18.686101   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:05:18.697906   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:05:18.697921   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:05:18.739698   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:05:18.739708   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:05:18.753553   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:05:18.753566   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
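While healthz keeps failing, each round snapshots diagnostics: `docker ps -a` filtered by the kubelet's k8s_<component>_ container-naming convention, then `docker logs --tail 400` for every matching container (prior and current instance alike), plus the kubelet and docker journals, dmesg, and `kubectl describe nodes`. A Go sketch of the per-component gathering step (containersFor is an illustrative helper):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containersFor lists all container IDs (running or exited) whose
    // name matches the kubelet-created k8s_<component> pattern.
    func containersFor(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, comp := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "storage-provisioner"} {
    		ids, err := containersFor(comp)
    		if err != nil {
    			panic(err)
    		}
    		for _, id := range ids {
    			// Tail the last 400 lines of each instance, old and new.
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("==> %s [%s]\n%s\n", comp, id, logs)
    		}
    	}
    }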
	I0318 04:05:21.268589   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:23.506181   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:23.506232   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:26.270898   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:26.271320   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:05:26.310624   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:05:26.310768   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:05:26.331207   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:05:26.331337   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:05:26.346234   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:05:26.346314   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:05:26.362180   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:05:26.362263   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:05:26.372952   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:05:26.373015   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:05:26.383893   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:05:26.383974   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:05:26.394427   12818 logs.go:276] 0 containers: []
	W0318 04:05:26.394438   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:05:26.394511   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:05:26.405309   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:05:26.405326   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:05:26.405331   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:05:26.410121   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:05:26.410129   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:05:26.448288   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:05:26.448297   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:05:26.463630   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:05:26.463644   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:05:26.477927   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:05:26.477939   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:05:26.515965   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:05:26.515979   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:05:26.531133   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:05:26.531146   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:05:26.547171   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:05:26.547182   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:05:26.559056   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:05:26.559071   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:05:26.574231   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:05:26.574241   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:05:26.585620   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:05:26.585632   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:05:26.599861   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:05:26.599871   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:05:26.611320   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:05:26.611333   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:05:26.622783   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:05:26.622799   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:05:26.643082   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:05:26.643095   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:05:26.684020   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:05:26.684029   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:05:26.697970   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:05:26.697983   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:05:28.507192   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:28.507247   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:29.225264   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:33.508514   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:33.508560   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:34.227548   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:34.227752   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:05:34.244801   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:05:34.244891   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:05:34.258059   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:05:34.258123   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:05:34.268927   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:05:34.268997   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:05:34.279433   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:05:34.279513   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:05:34.290355   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:05:34.290416   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:05:34.305707   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:05:34.305778   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:05:34.316027   12818 logs.go:276] 0 containers: []
	W0318 04:05:34.316042   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:05:34.316094   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:05:34.327546   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:05:34.327573   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:05:34.327578   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:05:34.339053   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:05:34.339066   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:05:34.364411   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:05:34.364420   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:05:34.403410   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:05:34.403418   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:05:34.440220   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:05:34.440242   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:05:34.456161   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:05:34.456185   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:05:34.467604   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:05:34.467613   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:05:34.481918   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:05:34.481929   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:05:34.496229   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:05:34.496247   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:05:34.508279   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:05:34.508290   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:05:34.526983   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:05:34.526995   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:05:34.531728   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:05:34.531734   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:05:34.573360   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:05:34.573371   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:05:34.588911   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:05:34.588922   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:05:34.605511   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:05:34.605525   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:05:34.616981   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:05:34.616993   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:05:34.631853   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:05:34.631871   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:05:37.152016   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:38.510415   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:38.510500   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:42.153271   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:42.153488   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:05:42.170783   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:05:42.170892   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:05:42.184253   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:05:42.184341   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:05:42.195556   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:05:42.195641   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:05:42.206264   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:05:42.206336   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:05:42.217284   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:05:42.217354   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:05:42.228381   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:05:42.228443   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:05:42.238809   12818 logs.go:276] 0 containers: []
	W0318 04:05:42.238823   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:05:42.238893   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:05:42.249165   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:05:42.249183   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:05:42.249188   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:05:42.263141   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:05:42.263156   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:05:42.280389   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:05:42.280400   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:05:42.292498   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:05:42.292509   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:05:42.328376   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:05:42.328386   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:05:42.364869   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:05:42.364880   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:05:42.376162   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:05:42.376172   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:05:42.402135   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:05:42.402146   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:05:42.416908   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:05:42.416919   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:05:42.432056   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:05:42.432067   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:05:42.449090   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:05:42.449101   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:05:42.460636   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:05:42.460646   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:05:42.474542   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:05:42.474552   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:05:42.488258   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:05:42.488268   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:05:42.499176   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:05:42.499186   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:05:42.510765   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:05:42.510777   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:05:42.549304   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:05:42.549314   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:05:43.512894   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:43.512929   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:45.055739   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:48.514628   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:48.514786   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:50.058003   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:50.058306   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:05:50.083041   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:05:50.083160   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:05:50.099705   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:05:50.099783   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:05:50.112809   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:05:50.112884   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:05:50.124201   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:05:50.124274   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:05:50.135802   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:05:50.135873   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:05:50.150614   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:05:50.150682   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:05:50.164320   12818 logs.go:276] 0 containers: []
	W0318 04:05:50.164332   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:05:50.164399   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:05:50.174250   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:05:50.174270   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:05:50.174275   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:05:50.189062   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:05:50.189074   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:05:50.213903   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:05:50.213910   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:05:50.227987   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:05:50.227998   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:05:50.239551   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:05:50.239564   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:05:50.278414   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:05:50.278430   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:05:50.282724   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:05:50.282730   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:05:50.296579   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:05:50.296590   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:05:50.311454   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:05:50.311465   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:05:50.323350   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:05:50.323360   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:05:50.335343   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:05:50.335357   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:05:50.360189   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:05:50.360201   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:05:50.374876   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:05:50.374888   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:05:50.386183   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:05:50.386196   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:05:50.423319   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:05:50.423332   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:05:50.438930   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:05:50.438944   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:05:50.475346   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:05:50.475357   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:05:52.989749   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:53.517115   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:53.517165   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:57.992071   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:57.992258   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:05:58.009544   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:05:58.009638   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:05:58.023576   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:05:58.023665   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:05:58.035064   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:05:58.035133   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:05:58.045607   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:05:58.045669   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:05:58.056099   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:05:58.056165   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:05:58.068038   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:05:58.068111   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:05:58.078341   12818 logs.go:276] 0 containers: []
	W0318 04:05:58.078352   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:05:58.078409   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:05:58.089002   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:05:58.089018   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:05:58.089023   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:05:58.103255   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:05:58.103266   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:05:58.117454   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:05:58.117465   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:05:58.132173   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:05:58.132184   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:05:58.144152   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:05:58.144162   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:05:58.168882   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:05:58.168900   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:05:58.184003   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:05:58.184014   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:05:58.519248   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:58.519375   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:05:58.535042   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:05:58.535117   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:05:58.547162   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:05:58.547236   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:05:58.567596   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:05:58.567672   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:05:58.582794   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:05:58.582864   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:05:58.593176   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:05:58.593242   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:05:58.603058   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:05:58.603140   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:05:58.613206   12861 logs.go:276] 0 containers: []
	W0318 04:05:58.613219   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:05:58.613280   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:05:58.624291   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:05:58.624310   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:05:58.624315   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:05:58.651105   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:05:58.651116   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:05:58.662480   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:05:58.662493   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:05:58.677332   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:05:58.677347   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:05:58.696273   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:05:58.696286   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:05:58.707714   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:05:58.707724   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:05:58.746285   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:05:58.746293   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:05:58.860732   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:05:58.860746   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:05:58.875382   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:05:58.875393   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:05:58.893172   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:05:58.893184   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:05:58.905289   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:05:58.905300   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:05:58.916727   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:05:58.916739   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:05:58.928812   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:05:58.928824   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:05:58.933452   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:05:58.933459   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:05:58.947244   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:05:58.947256   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:05:58.963701   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:05:58.963713   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:05:58.976560   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:05:58.976573   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:06:01.508359   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:58.195746   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:05:58.195759   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:05:58.207636   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:05:58.207649   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:05:58.219556   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:05:58.219567   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:05:58.256886   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:05:58.256896   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:05:58.261701   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:05:58.261711   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:05:58.301952   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:05:58.301965   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:05:58.313785   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:05:58.313797   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:05:58.331433   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:05:58.331446   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:05:58.347513   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:05:58.347526   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:05:58.359457   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:05:58.359468   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:00.896464   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:06.510071   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:06.510236   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:06.525358   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:06:06.525445   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:06.538848   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:06:06.538926   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:05.898920   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:05.899345   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:05.938551   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:06:05.938696   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:05.959502   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:06:05.959612   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:05.975013   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:06:05.975094   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:05.987621   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:06:05.987704   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:05.998189   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:06:05.998258   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:06.009030   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:06:06.009096   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:06.019059   12818 logs.go:276] 0 containers: []
	W0318 04:06:06.019077   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:06.019138   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:06.029900   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:06:06.029921   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:06.029928   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:06.034989   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:06:06.034997   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:06:06.046709   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:06:06.046720   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:06:06.058419   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:06:06.058431   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:06:06.073013   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:06:06.073027   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:06:06.090569   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:06:06.090579   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:06:06.102727   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:06.102739   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:06.141537   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:06.141547   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:06.177539   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:06:06.177552   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:06:06.191728   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:06:06.191741   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:06:06.205617   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:06:06.205629   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:06:06.220332   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:06.220343   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:06.245674   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:06:06.245684   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:06.257694   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:06:06.257705   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:06:06.303773   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:06:06.303783   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:06:06.318416   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:06:06.318428   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:06:06.336828   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:06:06.336839   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:06:06.549073   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:06:06.549143   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:06.559344   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:06:06.559431   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:06.569542   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:06:06.569612   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:06.580590   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:06:06.580662   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:06.591043   12861 logs.go:276] 0 containers: []
	W0318 04:06:06.591056   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:06.591112   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:06.601496   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:06:06.601514   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:06.601520   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:06.605950   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:06:06.605957   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:06:06.619514   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:06:06.619530   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:06:06.630944   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:06.630957   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:06.656449   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:06:06.656465   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:06.668298   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:06:06.668308   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:06:06.680486   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:06:06.680497   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:06:06.692535   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:06:06.692549   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:06:06.717389   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:06:06.717398   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:06:06.731039   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:06:06.731051   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:06:06.745662   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:06:06.745672   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:06:06.763522   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:06.763536   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:06.801991   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:06.802003   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:06.841377   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:06:06.841389   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:06:06.855256   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:06:06.855267   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:06:06.867079   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:06:06.867093   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:06:06.884056   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:06:06.884068   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:06:09.397071   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:08.849942   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:14.399265   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:14.399419   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:14.411862   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:06:14.411931   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:14.422262   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:06:14.422324   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:14.432425   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:06:14.432513   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:14.442442   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:06:14.442518   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:14.452761   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:06:14.452831   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:14.464933   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:06:14.464999   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:14.475381   12861 logs.go:276] 0 containers: []
	W0318 04:06:14.475392   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:14.475448   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:14.485494   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:06:14.485524   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:06:14.485530   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:06:14.497754   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:06:14.497764   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:06:14.508977   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:14.508988   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:14.534720   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:14.534731   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:14.571516   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:14.571525   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:14.575189   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:14.575198   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:14.610375   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:06:14.610386   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:06:14.632126   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:06:14.632137   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:06:14.644155   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:06:14.644168   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:06:14.659161   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:06:14.659175   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:06:14.670681   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:06:14.670691   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:06:14.686764   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:06:14.686777   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:06:14.700866   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:06:14.700879   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:06:14.715311   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:06:14.715325   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:06:14.726399   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:06:14.726410   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:06:14.750957   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:06:14.750968   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:06:14.762773   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:06:14.762785   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:13.852005   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:13.852254   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:13.872935   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:06:13.873051   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:13.887681   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:06:13.887760   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:13.899483   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:06:13.899566   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:13.910598   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:06:13.910674   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:13.921588   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:06:13.921661   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:13.932079   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:06:13.932149   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:13.947490   12818 logs.go:276] 0 containers: []
	W0318 04:06:13.947502   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:13.947564   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:13.958052   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:06:13.958068   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:06:13.958073   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:06:13.995908   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:06:13.995920   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:06:14.010907   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:06:14.010918   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:06:14.022280   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:06:14.022292   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:06:14.033704   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:06:14.033716   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:14.045972   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:06:14.045986   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:06:14.060511   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:06:14.060521   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:06:14.072865   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:06:14.072878   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:06:14.091067   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:14.091079   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:14.115293   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:14.115300   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:14.150705   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:06:14.150717   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:06:14.162992   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:06:14.163005   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:06:14.174948   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:14.174960   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:14.214485   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:14.214495   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:14.218788   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:06:14.218793   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:06:14.233139   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:06:14.233152   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:06:14.248109   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:06:14.248119   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:06:16.765401   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:17.276675   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:21.767567   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:21.767709   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:21.779728   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:06:21.779796   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:21.790679   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:06:21.790756   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:21.801810   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:06:21.801878   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:21.817100   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:06:21.817201   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:21.835699   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:06:21.835769   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:21.846419   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:06:21.846487   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:21.856440   12818 logs.go:276] 0 containers: []
	W0318 04:06:21.856451   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:21.856510   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:21.867099   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:06:21.867115   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:21.867120   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:21.901089   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:06:21.901103   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:06:21.915176   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:06:21.915190   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:06:21.951571   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:06:21.951589   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:06:21.963321   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:06:21.963333   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:06:21.978605   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:06:21.978617   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:06:21.991922   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:21.991933   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:22.030682   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:06:22.030690   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:06:22.044418   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:06:22.044428   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:06:22.059043   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:06:22.059055   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:06:22.071376   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:06:22.071386   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:06:22.086650   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:06:22.086663   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:06:22.098580   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:06:22.098591   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:06:22.119428   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:22.119438   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:22.124228   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:06:22.124239   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:22.135969   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:22.135979   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:22.161732   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:06:22.161743   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:06:22.278873   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:22.278997   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:22.292176   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:06:22.292255   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:22.304130   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:06:22.304202   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:22.313871   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:06:22.313932   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:22.324545   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:06:22.324606   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:22.335250   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:06:22.335323   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:22.347323   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:06:22.347382   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:22.365974   12861 logs.go:276] 0 containers: []
	W0318 04:06:22.365986   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:22.366044   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:22.376297   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:06:22.376316   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:22.376322   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:22.416779   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:22.416789   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:22.421387   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:06:22.421394   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:06:22.432475   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:06:22.432486   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:06:22.444165   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:06:22.444176   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:06:22.461992   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:06:22.462003   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:06:22.474431   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:06:22.474441   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:06:22.489028   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:06:22.489038   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:06:22.512438   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:06:22.512449   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:06:22.523767   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:06:22.523779   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:06:22.535116   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:06:22.535129   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:06:22.546714   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:06:22.546728   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:22.558190   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:22.558202   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:22.597282   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:06:22.597295   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:06:22.611599   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:06:22.611613   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:06:22.625901   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:06:22.625911   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:06:22.646165   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:22.646177   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:25.171851   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:24.675528   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:30.174017   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:30.174139   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:30.185058   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:06:30.185145   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:30.195981   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:06:30.196056   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:30.206540   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:06:30.206613   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:30.216655   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:06:30.216736   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:30.227189   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:06:30.227271   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:30.237369   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:06:30.237436   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:30.248615   12861 logs.go:276] 0 containers: []
	W0318 04:06:30.248630   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:30.248693   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:30.262555   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:06:30.262594   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:06:30.262600   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:06:30.274319   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:30.274330   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:30.278372   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:06:30.278382   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:06:30.290198   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:06:30.290211   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:06:30.301750   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:30.301761   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:30.326433   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:06:30.326442   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:30.337980   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:30.337990   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:30.376516   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:06:30.376527   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:06:30.390046   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:06:30.390057   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:06:30.414077   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:06:30.414088   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:06:30.428380   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:06:30.428391   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:06:30.440572   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:06:30.440582   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:06:30.452491   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:30.452504   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:30.488404   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:06:30.488415   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:06:30.502542   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:06:30.502553   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:06:30.516931   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:06:30.516944   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:06:30.535893   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:06:30.535902   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:06:29.677854   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:29.678261   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:29.714705   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:06:29.714836   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:29.733727   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:06:29.733823   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:29.756255   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:06:29.756335   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:29.768311   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:06:29.768387   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:29.779452   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:06:29.779519   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:29.790253   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:06:29.790326   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:29.815678   12818 logs.go:276] 0 containers: []
	W0318 04:06:29.815691   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:29.815752   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:29.825899   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:06:29.825915   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:06:29.825920   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:06:29.840889   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:06:29.840899   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:06:29.852427   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:06:29.852442   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:29.864576   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:06:29.864590   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:06:29.876555   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:06:29.876567   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:06:29.891014   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:29.891024   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:29.914645   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:29.914654   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:29.918734   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:06:29.918743   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:06:29.938267   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:06:29.938282   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:06:29.949988   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:06:29.949999   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:06:29.964599   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:06:29.964609   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:06:29.982547   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:29.982559   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:30.021758   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:30.021768   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:30.056748   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:06:30.056760   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:06:30.095406   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:06:30.095419   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:06:30.109643   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:06:30.109654   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:06:30.121069   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:06:30.121080   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:06:32.634766   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:33.050365   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:37.637000   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:37.637281   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:37.671412   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:06:37.671555   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:37.691701   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:06:37.691810   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:37.706740   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:06:37.706827   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:37.721011   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:06:37.721237   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:37.732812   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:06:37.732873   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:37.750748   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:06:37.750808   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:37.761505   12818 logs.go:276] 0 containers: []
	W0318 04:06:37.761518   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:37.761567   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:37.772318   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:06:37.772334   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:06:37.772339   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:06:37.784227   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:06:37.784238   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:06:37.812883   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:37.812898   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:37.848494   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:06:37.848508   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:06:37.863771   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:06:37.863782   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:06:37.902637   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:06:37.902649   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:06:37.920163   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:06:37.920178   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:06:37.935612   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:06:37.935622   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:37.947383   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:37.947394   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:37.951696   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:06:37.951707   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:06:37.965566   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:06:37.965576   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:06:37.977819   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:37.977833   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:38.001561   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:38.001567   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:38.041120   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:06:38.041129   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:06:38.056163   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:06:38.056174   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:06:38.071482   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:06:38.071498   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:06:38.083996   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:06:38.084012   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
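
	Each "Gathering logs for X ..." line is immediately followed by the bash command it runs. Collected per pass: docker logs --tail 400 for every container ID, journalctl for the kubelet and docker/cri-docker units, a severity-filtered dmesg, kubectl describe nodes via the pinned v1.24.1 binary, and a container-status listing that tries crictl first and falls back to docker ps -a. A condensed sketch of that dispatch, with the command strings copied verbatim from the log and the SSH execution replaced by a local bash -c (an assumption made to keep the example runnable):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    // gatherCommands maps each log source to the bash command the runner
	    // executes for it, as printed in the ssh_runner lines above. The backtick
	    // expression tries crictl and falls back to docker when it is absent.
	    func gatherCommands(containerID string) map[string]string {
	    	return map[string]string{
	    		"container logs":   "docker logs --tail 400 " + containerID,
	    		"kubelet":          "sudo journalctl -u kubelet -n 400",
	    		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
	    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	    		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	    	}
	    }

	    func main() {
	    	for source, cmd := range gatherCommands("5cf992ffe371") {
	    		fmt.Printf("gathering %s ...\n", source)
	    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	    		if err != nil {
	    			fmt.Println("  failed:", err)
	    		}
	    		fmt.Print(string(out))
	    	}
	    }
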
	I0318 04:06:38.051252   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:38.051351   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:38.066898   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:06:38.066977   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:38.078224   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:06:38.078304   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:38.091188   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:06:38.091261   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:38.102387   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:06:38.102457   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:38.112576   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:06:38.112666   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:38.123545   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:06:38.123616   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:38.133581   12861 logs.go:276] 0 containers: []
	W0318 04:06:38.133594   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:38.133652   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:38.144407   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:06:38.144426   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:06:38.144431   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:06:38.156482   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:06:38.156494   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:06:38.173277   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:06:38.173292   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:06:38.185402   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:06:38.185414   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:06:38.197550   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:06:38.197560   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:06:38.211718   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:06:38.211728   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:06:38.226064   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:06:38.226074   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:06:38.245492   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:06:38.245502   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:06:38.258087   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:38.258098   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:38.262101   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:38.262111   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:38.298490   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:06:38.298502   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:06:38.309871   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:06:38.309881   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:06:38.321073   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:38.321082   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:38.346347   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:38.346358   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:38.383199   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:06:38.383212   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:06:38.402951   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:06:38.402960   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:06:38.427687   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:06:38.427700   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:40.943746   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:40.598117   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:45.945977   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:45.946100   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:45.958778   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:06:45.958855   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:45.969756   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:06:45.969834   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:45.983014   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:06:45.983099   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:45.994947   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:06:45.995049   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:46.008558   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:06:46.008631   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:46.020554   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:06:46.020632   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:46.031545   12861 logs.go:276] 0 containers: []
	W0318 04:06:46.031559   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:46.031617   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:46.042569   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:06:46.042587   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:06:46.042593   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:06:46.056204   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:06:46.056214   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:06:46.080718   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:06:46.080728   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:06:46.095606   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:06:46.095617   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:06:46.107580   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:06:46.107592   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:06:46.122167   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:06:46.122178   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:06:46.138957   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:06:46.138972   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:06:46.151274   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:46.151284   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:46.156248   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:46.156254   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:46.179823   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:06:46.179831   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:46.191995   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:06:46.192006   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:06:46.213472   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:06:46.213484   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:06:46.224781   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:46.224793   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:46.261778   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:06:46.261791   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:06:46.275727   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:46.275738   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:46.313627   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:06:46.313637   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:06:46.325820   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:06:46.325831   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:06:45.600426   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:45.600699   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:45.620759   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:06:45.620858   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:45.635535   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:06:45.635611   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:45.647897   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:06:45.647970   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:45.658201   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:06:45.658265   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:45.668297   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:06:45.668354   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:45.678788   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:06:45.678856   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:45.688687   12818 logs.go:276] 0 containers: []
	W0318 04:06:45.688699   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:45.688752   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:45.699593   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:06:45.699615   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:45.699621   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:45.704272   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:06:45.704280   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:06:45.743534   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:06:45.743544   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:06:45.754846   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:06:45.754856   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:06:45.774302   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:06:45.774313   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:06:45.786246   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:06:45.786257   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:06:45.800699   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:06:45.800709   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:06:45.821449   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:45.821459   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:45.846389   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:45.846401   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:45.888101   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:45.888111   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:45.924946   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:06:45.924958   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:06:45.937377   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:06:45.937390   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:06:45.949305   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:06:45.949318   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:06:45.965883   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:06:45.965894   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:06:45.981085   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:06:45.981097   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:06:45.995698   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:06:45.995706   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:06:46.008408   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:06:46.008422   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
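
	The pattern then repeats unchanged for both PIDs (12818 and 12861 are two concurrently running test processes probing the same guest address, which is why their timestamps interleave out of order): probe, time out after five seconds, re-enumerate, re-gather, pause roughly 2.5 s, probe again. A sketch of that outer loop; the pause and the overall deadline are inferred from the timestamps, not stated anywhere in the log:

	    package main

	    import (
	    	"fmt"
	    	"time"
	    )

	    // waitForAPIServer retries the healthz probe until it succeeds or an
	    // overall deadline passes, gathering diagnostics after every failure.
	    // The ~2.5s pause and the deadline are assumptions read off the log.
	    func waitForAPIServer(url string, deadline time.Duration,
	    	probe func(string) error, gather func()) error {
	    	stop := time.Now().Add(deadline)
	    	for time.Now().Before(stop) {
	    		if err := probe(url); err == nil {
	    			return nil
	    		}
	    		gather() // enumerate containers, dump logs, journalctl, dmesg, ...
	    		time.Sleep(2500 * time.Millisecond)
	    	}
	    	return fmt.Errorf("apiserver never became healthy within %s", deadline)
	    }

	    func main() {
	    	// Short deadline so the sketch terminates quickly; the probe here is
	    	// a stub that always fails, standing in for probeHealthz above.
	    	err := waitForAPIServer("https://10.0.2.15:8443/healthz", 6*time.Second,
	    		func(string) error { return fmt.Errorf("probe stubbed out") },
	    		func() { fmt.Println("gathering diagnostics ...") })
	    	fmt.Println(err)
	    }
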
	I0318 04:06:48.844866   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:48.522877   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:53.847131   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:53.847244   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:53.858934   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:06:53.859012   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:53.870317   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:06:53.870394   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:53.881917   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:06:53.881987   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:53.898517   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:06:53.898594   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:53.909608   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:06:53.909688   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:53.921043   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:06:53.921121   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:53.932843   12861 logs.go:276] 0 containers: []
	W0318 04:06:53.932857   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:53.932927   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:53.944118   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:06:53.944139   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:06:53.944144   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:06:53.960073   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:06:53.960085   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:06:53.979755   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:06:53.979765   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:06:53.991746   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:53.991756   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:54.028635   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:54.028653   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:54.033056   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:06:54.033062   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:06:54.046359   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:06:54.046369   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:06:54.069833   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:06:54.069845   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:06:54.080988   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:54.081000   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:54.104625   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:06:54.104633   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:54.117623   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:54.117636   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:54.151614   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:06:54.151626   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:06:54.165510   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:06:54.165520   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:06:54.176970   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:06:54.176981   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:06:54.188593   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:06:54.188604   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:06:54.202741   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:06:54.202751   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:06:54.213930   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:06:54.213943   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:06:53.525315   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:53.525548   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:53.543303   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:06:53.543392   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:53.558080   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:06:53.558151   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:53.569678   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:06:53.569753   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:53.580711   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:06:53.580792   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:53.591191   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:06:53.591265   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:53.601838   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:06:53.601918   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:53.618696   12818 logs.go:276] 0 containers: []
	W0318 04:06:53.618707   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:53.618769   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:53.628804   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:06:53.628824   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:53.628830   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:53.633155   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:06:53.633162   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:06:53.671524   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:06:53.671535   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:06:53.686237   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:06:53.686247   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:06:53.701790   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:53.701801   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:53.725633   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:53.725644   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:53.764452   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:06:53.764461   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:06:53.778698   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:06:53.778708   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:06:53.790656   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:06:53.790668   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:06:53.802576   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:06:53.802587   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:06:53.821345   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:06:53.821355   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:06:53.835129   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:06:53.835139   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:06:53.848932   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:06:53.848939   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:06:53.862730   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:06:53.862741   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:06:53.875308   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:06:53.875322   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:06:53.903406   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:06:53.903418   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:53.922053   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:53.922062   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:56.469528   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:56.726187   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:01.472245   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:01.472734   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:01.512039   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:07:01.512183   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:01.533202   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:07:01.533300   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:01.548794   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:07:01.548872   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:01.560938   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:07:01.561014   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:01.574475   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:07:01.574553   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:01.585658   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:07:01.585729   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:01.596210   12818 logs.go:276] 0 containers: []
	W0318 04:07:01.596224   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:01.596290   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:01.607243   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:07:01.607259   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:01.607264   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:01.612076   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:07:01.612084   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:07:01.623471   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:01.623486   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:01.647634   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:07:01.647642   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:07:01.668005   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:07:01.668016   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:07:01.683355   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:07:01.683365   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:07:01.695964   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:07:01.695974   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:07:01.714056   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:07:01.714069   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:07:01.729608   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:07:01.729617   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:07:01.745816   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:01.745827   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:01.789988   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:07:01.789999   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:07:01.808820   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:07:01.808831   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:07:01.822036   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:07:01.822048   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:07:01.834528   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:07:01.834541   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:01.847468   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:01.847483   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:01.908409   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:07:01.908419   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:07:01.923009   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:07:01.923020   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:07:01.726705   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:01.726788   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:01.743189   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:07:01.743264   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:01.762593   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:07:01.762677   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:01.776797   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:07:01.776873   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:01.788570   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:07:01.788644   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:01.799844   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:07:01.799920   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:01.811935   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:07:01.812011   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:01.824053   12861 logs.go:276] 0 containers: []
	W0318 04:07:01.824064   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:01.824132   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:01.835228   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:07:01.835244   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:01.835249   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:01.839748   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:07:01.839760   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:07:01.855551   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:07:01.855562   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:07:01.867536   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:07:01.867549   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:07:01.884510   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:07:01.884527   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:07:01.897012   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:01.897026   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:01.937085   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:07:01.937105   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:07:01.968211   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:07:01.968223   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:07:01.983377   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:07:01.983389   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:07:02.000926   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:07:02.000939   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:07:02.012475   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:07:02.012487   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:07:02.024175   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:02.024187   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:02.060157   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:07:02.060170   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:07:02.078928   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:07:02.078939   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:07:02.093772   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:07:02.093783   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:07:02.105124   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:02.105134   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:02.129525   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:07:02.129534   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:04.644651   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:04.472079   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:09.646810   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:09.646897   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:09.658315   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:07:09.658388   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:09.669629   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:07:09.669716   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:09.680934   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:07:09.681016   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:09.691949   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:07:09.692023   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:09.703402   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:07:09.703476   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:09.715631   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:07:09.715700   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:09.726836   12861 logs.go:276] 0 containers: []
	W0318 04:07:09.726848   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:09.726909   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:09.738119   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:07:09.738141   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:07:09.738147   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:09.750785   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:09.750799   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:09.755388   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:09.755398   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:09.791696   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:07:09.791709   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:07:09.827876   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:07:09.827889   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:07:09.848153   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:07:09.848166   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:07:09.863831   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:07:09.863845   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:07:09.877407   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:07:09.877419   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:07:09.889480   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:09.889492   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:09.929280   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:07:09.929292   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:07:09.943683   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:07:09.943693   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:07:09.958451   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:07:09.958464   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:07:09.970547   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:07:09.970557   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:07:09.983925   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:07:09.983937   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:07:09.998636   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:07:09.998647   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:07:10.010591   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:07:10.010601   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:07:10.027950   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:10.027960   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:09.474338   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:09.474780   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:09.513929   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:07:09.514070   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:09.535472   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:07:09.535598   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:09.550800   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:07:09.550870   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:09.563717   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:07:09.563794   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:09.579427   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:07:09.579500   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:09.590646   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:07:09.590718   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:09.601600   12818 logs.go:276] 0 containers: []
	W0318 04:07:09.601613   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:09.601672   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:09.612045   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:07:09.612064   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:09.612069   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:09.648971   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:07:09.648981   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:07:09.664686   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:07:09.664699   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:07:09.679989   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:07:09.680001   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:07:09.692197   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:09.692205   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:09.696723   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:07:09.696742   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:07:09.709200   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:07:09.709216   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:07:09.727788   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:07:09.727797   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:09.740578   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:07:09.740589   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:07:09.753072   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:07:09.753083   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:07:09.768891   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:09.768902   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:09.796124   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:09.796138   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:09.838109   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:07:09.838128   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:07:09.853302   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:07:09.853315   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:07:09.894799   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:07:09.894810   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:07:09.907826   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:07:09.907837   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:07:09.923466   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:07:09.923477   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:07:12.437350   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:12.553025   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:17.439634   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:17.439856   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:17.460026   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:07:17.460132   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:17.475320   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:07:17.475398   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:17.487487   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:07:17.487557   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:17.498212   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:07:17.498290   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:17.508806   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:07:17.508878   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:17.520094   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:07:17.520164   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:17.529992   12818 logs.go:276] 0 containers: []
	W0318 04:07:17.530002   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:17.530056   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:17.540544   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:07:17.540560   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:07:17.540565   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:07:17.579821   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:07:17.579834   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:07:17.600487   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:07:17.600495   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:07:17.613873   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:07:17.613884   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:17.627376   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:17.627388   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:17.666340   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:07:17.666354   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:07:17.682209   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:07:17.682221   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:07:17.699102   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:07:17.699110   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:07:17.711620   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:17.711632   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:17.753813   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:07:17.753825   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:07:17.768657   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:07:17.768670   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:07:17.785209   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:07:17.785226   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:07:17.798347   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:17.798358   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:17.825869   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:17.825883   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:17.830961   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:07:17.830968   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:07:17.842967   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:07:17.842978   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:07:17.863606   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:07:17.863614   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
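	(Each "Gathering logs for X ..." pair above runs one fixed command over SSH inside the guest: `docker logs --tail 400 <id>` per container, journalctl for the kubelet and Docker/cri-docker units, a filtered dmesg, and the bundled kubectl binary for "describe nodes". A local sketch of that sweep — the commands are copied verbatim from the log; running them outside the minikube guest is the assumption here:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// One representative command per log source, copied from the
		// ssh_runner lines above (e8fbba8cabd9 is the etcd container ID
		// from the listing earlier in this cycle).
		sweeps := []struct{ name, cmd string }{
			{"kubelet", `sudo journalctl -u kubelet -n 400`},
			{"Docker", `sudo journalctl -u docker -u cri-docker -n 400`},
			{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
			{"etcd [e8fbba8cabd9]", `docker logs --tail 400 e8fbba8cabd9`},
			{"describe nodes", `sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`},
		}
		for _, s := range sweeps {
			fmt.Printf("Gathering logs for %s ...\n", s.name)
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			if err != nil {
				fmt.Println(err)
				continue
			}
			fmt.Print(string(out))
		}
	})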
	I0318 04:07:17.553622   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:17.553705   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:17.564910   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:07:17.564980   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:17.575616   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:07:17.575690   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:17.587083   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:07:17.587149   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:17.599703   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:07:17.599777   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:17.611390   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:07:17.611476   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:17.622853   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:07:17.622931   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:17.633692   12861 logs.go:276] 0 containers: []
	W0318 04:07:17.633703   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:17.633765   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:17.645595   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:07:17.645617   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:07:17.645622   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:07:17.659249   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:07:17.659261   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:07:17.674007   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:17.674020   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:17.698181   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:07:17.698194   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:17.711969   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:17.711978   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:17.754505   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:17.754514   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:17.793790   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:07:17.793804   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:07:17.808514   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:07:17.808529   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:07:17.829787   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:07:17.829798   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:07:17.842613   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:07:17.842627   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:07:17.861248   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:07:17.861261   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:07:17.883149   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:17.883162   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:17.888336   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:07:17.888350   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:07:17.913799   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:07:17.913814   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:07:17.928931   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:07:17.928943   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:07:17.943930   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:07:17.943949   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:07:17.960109   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:07:17.960122   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:07:20.473607   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:20.394018   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:25.475911   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
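	(Two minikube processes, PIDs 12818 and 12861, are polling the same endpoint concurrently here, which is why timestamps appear to jump backwards as their output interleaves. Each poll is a plain HTTPS GET of /healthz; the roughly five-second gap between every "Checking" line and its "stopped" line suggests a client timeout of about 5s. A sketch of one such probe — the timeout value and the skipped certificate check are assumptions of this sketch, not minikube's actual settings:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			// Assumed ~5s timeout, inferred from the log spacing above.
			Timeout: 5 * time.Second,
			// Assumption: skip certificate verification for the sketch;
			// minikube itself authenticates against the cluster CA.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		const url = "https://10.0.2.15:8443/healthz"
		fmt.Println("Checking apiserver healthz at", url, "...")
		resp, err := client.Get(url)
		if err != nil {
			// With an unresponsive apiserver this yields the same
			// "Client.Timeout exceeded while awaiting headers" error
			// seen throughout this log.
			fmt.Println("stopped:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}

	Every failed probe triggers another full container-enumeration and log-gathering cycle, which is what the rest of this section repeats.)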
	I0318 04:07:25.475983   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:25.488104   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:07:25.488181   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:25.499126   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:07:25.499202   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:25.509884   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:07:25.509956   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:25.524725   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:07:25.524798   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:25.537035   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:07:25.537105   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:25.549410   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:07:25.549486   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:25.560748   12861 logs.go:276] 0 containers: []
	W0318 04:07:25.560757   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:25.560818   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:25.572130   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:07:25.572150   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:25.572156   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:25.613118   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:07:25.613136   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:07:25.625465   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:07:25.625477   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:07:25.638125   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:25.638137   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:25.642579   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:07:25.642587   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:07:25.657541   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:07:25.657549   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:07:25.672584   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:07:25.672594   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:07:25.687782   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:07:25.687793   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:07:25.700672   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:07:25.700684   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:07:25.713308   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:07:25.713318   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:07:25.740953   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:07:25.740964   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:07:25.756317   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:07:25.756326   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:07:25.768667   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:25.768680   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:25.794537   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:07:25.794557   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:25.807180   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:25.807199   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:25.844361   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:07:25.844377   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:07:25.866747   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:07:25.866762   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:07:25.395888   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:25.396188   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:25.424906   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:07:25.425034   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:25.442376   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:07:25.442486   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:25.455698   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:07:25.455769   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:25.467346   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:07:25.467421   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:25.478440   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:07:25.478509   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:25.490170   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:07:25.490233   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:25.510985   12818 logs.go:276] 0 containers: []
	W0318 04:07:25.510994   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:25.511025   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:25.522359   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:07:25.522379   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:07:25.522384   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:07:25.534863   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:25.534880   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:25.559289   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:25.559303   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:25.598156   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:07:25.598170   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:07:25.640557   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:07:25.640567   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:07:25.655657   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:07:25.655669   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:07:25.670930   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:07:25.670941   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:07:25.691564   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:07:25.691575   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:07:25.710933   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:25.710948   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:25.754365   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:25.754379   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:25.759386   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:07:25.759396   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:07:25.772440   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:07:25.772454   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:07:25.789584   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:07:25.789594   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:25.802150   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:07:25.802162   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:07:25.820119   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:07:25.820137   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:07:25.835506   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:07:25.835518   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:07:25.847440   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:07:25.847455   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:07:28.379502   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:28.362185   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:33.381548   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:33.381629   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:33.400011   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:07:33.400090   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:33.411252   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:07:33.411326   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:33.424086   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:07:33.424161   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:33.436003   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:07:33.436081   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:33.451930   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:07:33.452005   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:33.463137   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:07:33.463212   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:33.473959   12861 logs.go:276] 0 containers: []
	W0318 04:07:33.473973   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:33.474034   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:33.486015   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:07:33.486037   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:33.486042   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:33.526983   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:07:33.526999   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:07:33.545444   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:07:33.545461   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:07:33.558746   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:33.558759   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:33.583404   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:33.583422   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:33.588062   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:33.588076   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:33.625582   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:07:33.625596   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:07:33.640956   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:07:33.640968   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:07:33.656987   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:07:33.657000   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:33.669626   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:07:33.669638   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:07:33.691404   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:07:33.691417   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:07:33.704464   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:07:33.704474   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:07:33.717730   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:07:33.717743   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:07:33.738943   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:07:33.738960   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:07:33.752320   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:07:33.752335   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:07:33.764721   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:07:33.764731   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:07:33.790438   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:07:33.790448   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:07:36.305326   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:33.364361   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:33.364515   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:33.378401   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:07:33.378482   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:33.390966   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:07:33.391041   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:33.406474   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:07:33.406547   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:33.418429   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:07:33.418503   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:33.429790   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:07:33.429863   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:33.441047   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:07:33.441124   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:33.452374   12818 logs.go:276] 0 containers: []
	W0318 04:07:33.452388   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:33.452415   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:33.465912   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:07:33.465929   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:07:33.465934   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:07:33.478565   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:33.478582   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:33.483047   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:07:33.483057   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:07:33.497843   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:07:33.497854   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:07:33.516519   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:07:33.516532   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:07:33.532439   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:07:33.532451   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:07:33.574704   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:07:33.574717   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:07:33.590021   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:07:33.590029   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:07:33.602315   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:33.602328   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:33.648836   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:33.648853   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:33.687100   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:07:33.687118   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:07:33.703384   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:07:33.703397   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:07:33.720937   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:07:33.720951   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:33.733874   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:07:33.733889   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:07:33.749087   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:07:33.749104   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:07:33.768820   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:07:33.768832   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:07:33.788502   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:33.788514   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:36.315152   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:41.307755   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:41.308007   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:41.330498   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:07:41.330593   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:41.345682   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:07:41.345763   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:41.358673   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:07:41.358743   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:41.370747   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:07:41.370824   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:41.381744   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:07:41.381818   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:41.393653   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:07:41.393721   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:41.405278   12861 logs.go:276] 0 containers: []
	W0318 04:07:41.405290   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:41.405349   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:41.424370   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:07:41.424386   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:07:41.424390   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:07:41.437434   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:07:41.437446   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:07:41.455385   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:07:41.455395   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:07:41.467595   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:07:41.467606   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:07:41.479695   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:07:41.479706   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:07:41.494226   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:07:41.494236   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:07:41.506122   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:07:41.506133   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:07:41.521805   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:07:41.521816   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:07:41.534306   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:41.534316   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:41.317364   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:41.317502   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:41.336098   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:07:41.336192   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:41.350829   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:07:41.350905   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:41.363518   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:07:41.363588   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:41.375104   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:07:41.375175   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:41.386907   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:07:41.386978   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:41.398636   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:07:41.398705   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:41.411473   12818 logs.go:276] 0 containers: []
	W0318 04:07:41.411485   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:41.411545   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:41.422744   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:07:41.422762   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:07:41.422767   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:07:41.439778   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:41.439787   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:41.467048   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:41.467059   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:41.510758   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:41.510772   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:41.516082   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:07:41.516091   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:07:41.533714   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:07:41.533726   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:07:41.546850   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:07:41.546861   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:07:41.562884   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:07:41.562896   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:07:41.575237   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:41.575245   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:41.615390   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:07:41.615404   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:07:41.628494   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:07:41.628506   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:07:41.643526   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:07:41.643544   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:07:41.684100   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:07:41.684111   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:07:41.702194   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:07:41.702206   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:07:41.714375   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:07:41.714384   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:07:41.733349   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:07:41.733362   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:07:41.745385   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:07:41.745399   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:41.574039   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:41.574061   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:41.611667   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:07:41.611680   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:07:41.638130   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:07:41.638146   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:41.652901   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:41.652914   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:41.657479   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:07:41.657491   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:07:41.672490   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:41.672501   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:41.697723   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:07:41.697742   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:07:41.713551   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:07:41.713570   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:07:44.231777   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:44.267873   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:49.234136   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:49.234521   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:49.264900   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:07:49.265034   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:49.285565   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:07:49.285655   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:49.300893   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:07:49.300978   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:49.313215   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:07:49.313293   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:49.325180   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:07:49.325256   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:49.337601   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:07:49.337640   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:49.349068   12861 logs.go:276] 0 containers: []
	W0318 04:07:49.349076   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:49.349108   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:49.360783   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:07:49.360798   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:07:49.360803   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:07:49.373420   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:49.373429   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:49.414597   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:07:49.414609   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:07:49.429734   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:07:49.429745   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:07:49.444553   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:07:49.444568   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:07:49.462784   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:49.462793   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:49.467175   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:07:49.467186   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:07:49.479965   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:07:49.479977   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:07:49.492147   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:49.492158   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:49.515726   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:07:49.515742   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:49.528747   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:49.528758   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:49.566686   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:07:49.566699   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:07:49.601139   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:07:49.601150   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:07:49.623262   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:07:49.623284   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:07:49.635552   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:07:49.635567   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:07:49.648682   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:07:49.648699   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:07:49.664186   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:07:49.664202   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:07:49.269966   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:49.270086   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:49.288641   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:07:49.288719   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:49.302724   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:07:49.302782   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:49.314147   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:07:49.314185   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:49.325690   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:07:49.325729   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:49.336886   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:07:49.336964   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:49.348289   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:07:49.348360   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:49.360198   12818 logs.go:276] 0 containers: []
	W0318 04:07:49.360212   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:49.360278   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:49.372282   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:07:49.372303   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:49.372309   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:49.377139   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:07:49.377148   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:07:49.419904   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:49.419915   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:49.461799   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:07:49.461819   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:07:49.477214   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:07:49.477228   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:07:49.493198   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:07:49.493206   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:07:49.505862   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:07:49.505874   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:07:49.518842   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:07:49.518855   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:49.531727   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:07:49.531737   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:07:49.547533   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:07:49.547544   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:07:49.560399   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:07:49.560412   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:07:49.573134   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:49.573145   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:49.596943   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:49.596958   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:49.638619   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:07:49.638631   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:07:49.653508   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:07:49.653524   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:07:49.665194   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:07:49.665203   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:07:49.683821   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:07:49.683833   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:07:52.201726   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:52.178604   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:57.202635   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:57.202682   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:57.214438   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:07:57.214477   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:57.225919   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:07:57.225996   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:57.238811   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:07:57.238885   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:57.249922   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:07:57.249996   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:57.265893   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:07:57.265964   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:57.279182   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:07:57.279257   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:57.291027   12818 logs.go:276] 0 containers: []
	W0318 04:07:57.291041   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:57.291110   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:57.303109   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:07:57.303128   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:07:57.303135   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:07:57.342320   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:07:57.342338   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:07:57.354754   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:07:57.354767   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:07:57.367909   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:07:57.367923   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:07:57.384032   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:07:57.384044   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:07:57.395933   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:57.395949   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:57.432775   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:07:57.432787   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:07:57.447471   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:07:57.447484   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:07:57.460811   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:07:57.460824   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:07:57.477116   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:57.477129   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:57.501925   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:07:57.501938   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:57.514819   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:57.514833   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:57.519348   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:07:57.519361   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:07:57.534686   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:07:57.534698   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:07:57.554255   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:57.554272   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:57.596056   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:07:57.596069   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:07:57.607987   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:07:57.607998   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:07:57.180697   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:57.180803   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:57.191770   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:07:57.191845   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:57.202466   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:07:57.202538   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:57.213695   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:07:57.213781   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:57.228641   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:07:57.228711   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:57.239957   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:07:57.240001   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:57.251942   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:07:57.251999   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:57.262700   12861 logs.go:276] 0 containers: []
	W0318 04:07:57.262713   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:57.262778   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:57.280395   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:07:57.280412   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:07:57.280417   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:07:57.296195   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:57.296213   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:57.320988   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:07:57.321000   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:07:57.333545   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:07:57.333557   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:07:57.346326   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:07:57.346339   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:57.359123   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:57.359137   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:57.399701   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:07:57.399715   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:07:57.414243   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:07:57.414255   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:07:57.439820   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:07:57.439832   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:07:57.452482   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:07:57.452494   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:07:57.467920   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:07:57.467937   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:07:57.493840   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:07:57.493852   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:07:57.512363   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:07:57.512376   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:07:57.525379   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:57.525393   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:57.530227   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:57.530240   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:57.568930   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:07:57.568941   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:07:57.580665   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:07:57.580676   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:08:00.095714   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:00.123661   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:05.097992   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
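The recurring error string "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" is what Go's net/http produces when an http.Client's Timeout expires before response headers arrive. A minimal sketch of this kind of healthz poll (TLS verification is skipped here for brevity; minikube actually pins its own CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // on expiry: "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // sketch only; verify against the cluster CA in real use
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
            } else {
                fmt.Println("stopped:", err) // matches the api_server.go:269 lines above
            }
            time.Sleep(2 * time.Second)
        }
    }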
	I0318 04:08:05.098186   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:08:05.110115   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:08:05.110197   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:08:05.121295   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:08:05.121368   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:08:05.132712   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:08:05.132793   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:08:05.145284   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:08:05.145362   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:08:05.156739   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:08:05.156813   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:08:05.176498   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:08:05.176576   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:08:05.189026   12861 logs.go:276] 0 containers: []
	W0318 04:08:05.189039   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:08:05.189105   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:08:05.200993   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:08:05.201014   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:08:05.201021   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:08:05.242892   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:08:05.242904   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:08:05.280748   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:08:05.280756   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:08:05.295441   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:08:05.295450   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:08:05.308189   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:08:05.308204   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:08:05.320898   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:08:05.320909   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:08:05.336382   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:08:05.336395   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:08:05.349385   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:08:05.349399   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:08:05.365160   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:08:05.365173   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:08:05.376882   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:08:05.376894   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:08:05.400684   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:08:05.400699   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:08:05.427557   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:08:05.427567   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:08:05.443035   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:08:05.443045   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:08:05.456368   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:08:05.456380   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:08:05.460987   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:08:05.460996   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:08:05.473704   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:08:05.473714   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:08:05.493156   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:08:05.493167   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:08:05.125771   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:05.125855   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:08:05.137361   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:08:05.137442   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:08:05.152070   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:08:05.152144   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:08:05.164219   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:08:05.164293   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:08:05.181541   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:08:05.181617   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:08:05.192752   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:08:05.192821   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:08:05.204541   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:08:05.204614   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:08:05.215777   12818 logs.go:276] 0 containers: []
	W0318 04:08:05.215787   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:08:05.215850   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:08:05.231199   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:08:05.231216   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:08:05.231222   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:08:05.273964   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:08:05.273987   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:08:05.278872   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:08:05.278885   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:08:05.294557   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:08:05.294569   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:08:05.337100   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:08:05.337112   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:08:05.351540   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:08:05.351553   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:08:05.368005   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:08:05.368013   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:08:05.381978   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:08:05.381992   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:08:05.396927   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:08:05.396937   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:08:05.409869   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:08:05.409881   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:08:05.425953   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:08:05.425964   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:08:05.439580   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:08:05.439595   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:08:05.459339   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:08:05.459347   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:08:05.472109   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:08:05.472121   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:08:05.496419   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:08:05.496437   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:08:05.532511   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:08:05.532524   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:08:05.551059   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:08:05.551071   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:08:08.062871   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:08.006708   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:13.065093   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:13.065147   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:08:13.077307   12818 logs.go:276] 2 containers: [bcc742beeb53 c18954a8ecc0]
	I0318 04:08:13.077379   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:08:13.089575   12818 logs.go:276] 2 containers: [e8fbba8cabd9 5cf992ffe371]
	I0318 04:08:13.089644   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:08:13.101825   12818 logs.go:276] 1 containers: [a39cb3663fca]
	I0318 04:08:13.101897   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:08:13.116193   12818 logs.go:276] 2 containers: [055e5acc2bc1 e020ee6903f4]
	I0318 04:08:13.116271   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:08:13.127525   12818 logs.go:276] 1 containers: [49651ffb9baa]
	I0318 04:08:13.127598   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:08:13.139106   12818 logs.go:276] 2 containers: [0a0ed488056b 756e804d2063]
	I0318 04:08:13.139181   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:08:13.153094   12818 logs.go:276] 0 containers: []
	W0318 04:08:13.153106   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:08:13.153168   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:08:13.164163   12818 logs.go:276] 2 containers: [4c5285cbbc5f ed7b20e26c35]
	I0318 04:08:13.164180   12818 logs.go:123] Gathering logs for storage-provisioner [4c5285cbbc5f] ...
	I0318 04:08:13.164188   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5285cbbc5f"
	I0318 04:08:13.176808   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:08:13.176820   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:08:13.007689   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:13.007926   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:08:13.031263   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:08:13.031385   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:08:13.047822   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:08:13.047896   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:08:13.061962   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:08:13.062040   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:08:13.073006   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:08:13.073083   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:08:13.084371   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:08:13.084442   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:08:13.095890   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:08:13.095961   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:08:13.107183   12861 logs.go:276] 0 containers: []
	W0318 04:08:13.107195   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:08:13.107258   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:08:13.118626   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:08:13.118645   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:08:13.118651   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:08:13.135315   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:08:13.135329   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:08:13.147544   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:08:13.147557   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:08:13.172882   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:08:13.172899   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:08:13.189108   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:08:13.189122   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:08:13.202736   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:08:13.202750   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:08:13.223535   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:08:13.223549   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:08:13.236879   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:08:13.236895   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:08:13.251853   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:08:13.251865   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:08:13.256348   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:08:13.256358   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:08:13.294892   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:08:13.294902   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:08:13.310751   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:08:13.310764   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:08:13.324463   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:08:13.324475   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:08:13.364620   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:08:13.364636   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:08:13.391079   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:08:13.391091   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:08:13.404962   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:08:13.404975   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:08:13.418345   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:08:13.418361   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:08:15.936138   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:13.190031   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:08:13.190042   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:08:13.195143   12818 logs.go:123] Gathering logs for kube-apiserver [bcc742beeb53] ...
	I0318 04:08:13.195154   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc742beeb53"
	I0318 04:08:13.211457   12818 logs.go:123] Gathering logs for kube-proxy [49651ffb9baa] ...
	I0318 04:08:13.211468   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49651ffb9baa"
	I0318 04:08:13.224425   12818 logs.go:123] Gathering logs for etcd [5cf992ffe371] ...
	I0318 04:08:13.224435   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf992ffe371"
	I0318 04:08:13.240248   12818 logs.go:123] Gathering logs for kube-scheduler [055e5acc2bc1] ...
	I0318 04:08:13.240256   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 055e5acc2bc1"
	I0318 04:08:13.253462   12818 logs.go:123] Gathering logs for kube-scheduler [e020ee6903f4] ...
	I0318 04:08:13.253472   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e020ee6903f4"
	I0318 04:08:13.269367   12818 logs.go:123] Gathering logs for kube-controller-manager [0a0ed488056b] ...
	I0318 04:08:13.269384   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a0ed488056b"
	I0318 04:08:13.292705   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:08:13.292729   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:08:13.316581   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:08:13.316595   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:08:13.358107   12818 logs.go:123] Gathering logs for kube-apiserver [c18954a8ecc0] ...
	I0318 04:08:13.358126   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18954a8ecc0"
	I0318 04:08:13.399225   12818 logs.go:123] Gathering logs for etcd [e8fbba8cabd9] ...
	I0318 04:08:13.399239   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8fbba8cabd9"
	I0318 04:08:13.415685   12818 logs.go:123] Gathering logs for storage-provisioner [ed7b20e26c35] ...
	I0318 04:08:13.415698   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed7b20e26c35"
	I0318 04:08:13.427705   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:08:13.427718   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:08:13.462613   12818 logs.go:123] Gathering logs for coredns [a39cb3663fca] ...
	I0318 04:08:13.462624   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39cb3663fca"
	I0318 04:08:13.474126   12818 logs.go:123] Gathering logs for kube-controller-manager [756e804d2063] ...
	I0318 04:08:13.474139   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756e804d2063"
	I0318 04:08:15.991676   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:20.938483   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:20.938747   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:08:20.959544   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:08:20.959649   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:08:20.975131   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:08:20.975209   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:08:20.989085   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:08:20.989159   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:08:21.000680   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:08:21.000766   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:08:21.012000   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:08:21.012078   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:08:21.023064   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:08:21.023140   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:08:21.034485   12861 logs.go:276] 0 containers: []
	W0318 04:08:21.034497   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:08:21.034559   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:08:21.045781   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:08:21.045803   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:08:21.045809   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:08:21.050320   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:08:21.050330   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:08:21.065290   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:08:21.065305   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:08:21.077408   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:08:21.077421   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:08:21.089401   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:08:21.089413   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:08:21.104403   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:08:21.104415   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:08:21.120345   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:08:21.120358   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:08:21.144550   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:08:21.144565   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:08:21.184430   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:08:21.184450   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:08:21.210614   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:08:21.210635   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:08:21.226879   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:08:21.226893   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:08:21.239699   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:08:21.239712   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:08:21.258532   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:08:21.258545   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:08:21.276701   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:08:21.276714   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:08:21.337744   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:08:21.337756   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:08:21.353091   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:08:21.353107   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:08:21.368812   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:08:21.368829   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:08:20.993971   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:20.994009   12818 kubeadm.go:591] duration metric: took 4m5.056468333s to restartPrimaryControlPlane
	W0318 04:08:20.994043   12818 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 04:08:20.994057   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0318 04:08:22.039893   12818 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.045848917s)
	I0318 04:08:22.039972   12818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 04:08:22.044993   12818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 04:08:22.047915   12818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 04:08:22.050537   12818 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 04:08:22.050543   12818 kubeadm.go:156] found existing configuration files:
	
	I0318 04:08:22.050566   12818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/admin.conf
	I0318 04:08:22.053353   12818 kubeadm.go:162] "https://control-plane.minikube.internal:52301" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 04:08:22.053380   12818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 04:08:22.055880   12818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/kubelet.conf
	I0318 04:08:22.058567   12818 kubeadm.go:162] "https://control-plane.minikube.internal:52301" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 04:08:22.058592   12818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 04:08:22.062050   12818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/controller-manager.conf
	I0318 04:08:22.064637   12818 kubeadm.go:162] "https://control-plane.minikube.internal:52301" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 04:08:22.064656   12818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 04:08:22.067047   12818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/scheduler.conf
	I0318 04:08:22.069515   12818 kubeadm.go:162] "https://control-plane.minikube.internal:52301" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52301 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 04:08:22.069539   12818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
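The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane URL and deletes any file that lacks it (here they are all simply absent, so every grep exits 2 and every file is removed). A sketch of the same pattern in Go, with the endpoint and paths taken from the log:

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        // Keep each kubeconfig only if it already points at the expected
        // control-plane endpoint; otherwise remove it so `kubeadm init`
        // regenerates it.
        endpoint := "https://control-plane.minikube.internal:52301"
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(conf)
            if err != nil || !strings.Contains(string(data), endpoint) {
                os.Remove(conf) // equivalent of the `sudo rm -f` runs above
            }
        }
    }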
	I0318 04:08:22.071962   12818 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 04:08:22.087684   12818 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0318 04:08:22.087713   12818 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 04:08:22.148760   12818 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 04:08:22.148812   12818 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 04:08:22.148876   12818 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 04:08:22.201192   12818 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 04:08:22.205202   12818 out.go:204]   - Generating certificates and keys ...
	I0318 04:08:22.205244   12818 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 04:08:22.205285   12818 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 04:08:22.205357   12818 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 04:08:22.205408   12818 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 04:08:22.205461   12818 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 04:08:22.205611   12818 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 04:08:22.205657   12818 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 04:08:22.205689   12818 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 04:08:22.205733   12818 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 04:08:22.205779   12818 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 04:08:22.205806   12818 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 04:08:22.205852   12818 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 04:08:22.300092   12818 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 04:08:22.391764   12818 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 04:08:22.424820   12818 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 04:08:22.544721   12818 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 04:08:22.572654   12818 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 04:08:22.572939   12818 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 04:08:22.572977   12818 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 04:08:22.658487   12818 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 04:08:22.665543   12818 out.go:204]   - Booting up control plane ...
	I0318 04:08:22.665596   12818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 04:08:22.665637   12818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 04:08:22.665677   12818 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 04:08:22.665724   12818 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 04:08:22.665807   12818 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 04:08:23.883204   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:27.664501   12818 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.002330 seconds
	I0318 04:08:27.664595   12818 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 04:08:27.669974   12818 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 04:08:28.177818   12818 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 04:08:28.178082   12818 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-727000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 04:08:28.684052   12818 kubeadm.go:309] [bootstrap-token] Using token: 7fi7vs.pubje3x3gw34rkjb
	I0318 04:08:28.687174   12818 out.go:204]   - Configuring RBAC rules ...
	I0318 04:08:28.687243   12818 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 04:08:28.687303   12818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 04:08:28.690063   12818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 04:08:28.695348   12818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 04:08:28.696320   12818 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 04:08:28.697166   12818 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 04:08:28.700584   12818 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 04:08:28.877327   12818 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 04:08:29.089808   12818 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 04:08:29.090420   12818 kubeadm.go:309] 
	I0318 04:08:29.090456   12818 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 04:08:29.090459   12818 kubeadm.go:309] 
	I0318 04:08:29.090495   12818 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 04:08:29.090499   12818 kubeadm.go:309] 
	I0318 04:08:29.090511   12818 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 04:08:29.090539   12818 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 04:08:29.090680   12818 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 04:08:29.090699   12818 kubeadm.go:309] 
	I0318 04:08:29.090754   12818 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 04:08:29.090762   12818 kubeadm.go:309] 
	I0318 04:08:29.090790   12818 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 04:08:29.090793   12818 kubeadm.go:309] 
	I0318 04:08:29.090820   12818 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 04:08:29.090887   12818 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 04:08:29.090967   12818 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 04:08:29.090978   12818 kubeadm.go:309] 
	I0318 04:08:29.091040   12818 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 04:08:29.091085   12818 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 04:08:29.091088   12818 kubeadm.go:309] 
	I0318 04:08:29.091127   12818 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 7fi7vs.pubje3x3gw34rkjb \
	I0318 04:08:29.091182   12818 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8277259516d27b69cae5fa1851268f91d26ad55d9793249b2b30ee52eba5b372 \
	I0318 04:08:29.091195   12818 kubeadm.go:309] 	--control-plane 
	I0318 04:08:29.091198   12818 kubeadm.go:309] 
	I0318 04:08:29.091243   12818 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 04:08:29.091251   12818 kubeadm.go:309] 
	I0318 04:08:29.091292   12818 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 7fi7vs.pubje3x3gw34rkjb \
	I0318 04:08:29.091404   12818 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8277259516d27b69cae5fa1851268f91d26ad55d9793249b2b30ee52eba5b372 
	I0318 04:08:29.091453   12818 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
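The `--discovery-token-ca-cert-hash sha256:...` value in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A small Go sketch to recompute it from the CA file (the path below is kubeadm's default; per the log, minikube keeps its certs under /var/lib/minikube/certs):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert
        fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
    }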
	I0318 04:08:29.091461   12818 cni.go:84] Creating CNI manager for ""
	I0318 04:08:29.091468   12818 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:08:29.094926   12818 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 04:08:29.102059   12818 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 04:08:29.106361   12818 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
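The 457-byte /etc/cni/net.d/1-k8s.conflist written above is a bridge CNI chain. Its exact contents are not shown in the log, so the config below is only a representative bridge + portmap conflist per the CNI spec (plugin names and the pod subnet are illustrative):

    package main

    import "os"

    // Representative bridge CNI chain; minikube's actual 1-k8s.conflist is not
    // in the log, so these values are assumptions.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "k8s",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }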
	I0318 04:08:29.112320   12818 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 04:08:29.112456   12818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-727000 minikube.k8s.io/updated_at=2024_03_18T04_08_29_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd minikube.k8s.io/name=running-upgrade-727000 minikube.k8s.io/primary=true
	I0318 04:08:29.112469   12818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 04:08:29.159654   12818 ops.go:34] apiserver oom_adj: -16
	I0318 04:08:29.159691   12818 kubeadm.go:1107] duration metric: took 47.345958ms to wait for elevateKubeSystemPrivileges
	W0318 04:08:29.160013   12818 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 04:08:29.160056   12818 kubeadm.go:393] duration metric: took 4m13.237048625s to StartCluster
	I0318 04:08:29.160113   12818 settings.go:142] acquiring lock: {Name:mk6b2370db0877d87697e89b229a47432418434b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:08:29.160283   12818 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:08:29.161708   12818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/kubeconfig: {Name:mk14a68d5fe20b060d9b3cac470478a530bf535b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:08:29.162002   12818 config.go:182] Loaded profile config "running-upgrade-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:08:29.162050   12818 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:08:29.165982   12818 out.go:177] * Verifying Kubernetes components...
	I0318 04:08:29.162236   12818 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 04:08:29.173932   12818 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-727000"
	I0318 04:08:29.173954   12818 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-727000"
	I0318 04:08:29.173962   12818 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-727000"
	W0318 04:08:29.173970   12818 addons.go:243] addon storage-provisioner should already be in state true
	I0318 04:08:29.173971   12818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:08:29.173976   12818 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-727000"
	I0318 04:08:29.174020   12818 host.go:66] Checking if "running-upgrade-727000" exists ...
	I0318 04:08:29.175411   12818 kapi.go:59] client config for running-upgrade-727000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/client.key", CAFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106332a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
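The rest.Config dump above corresponds to a client-go client built from the profile's client certificate/key and the minikube CA. A minimal equivalent (sketch; host and file paths taken from the log, the node listing is just a smoke test):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/running-upgrade-727000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/18431-10628/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("nodes:", len(nodes.Items))
    }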
	I0318 04:08:29.175549   12818 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-727000"
	W0318 04:08:29.175555   12818 addons.go:243] addon default-storageclass should already be in state true
	I0318 04:08:29.175565   12818 host.go:66] Checking if "running-upgrade-727000" exists ...
	I0318 04:08:29.179986   12818 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:08:28.885311   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:28.885434   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:08:28.897309   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:08:28.897393   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:08:28.908644   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:08:28.908721   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:08:28.919982   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:08:28.920051   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:08:28.931154   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:08:28.931242   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:08:28.943378   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:08:28.943459   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:08:28.961053   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:08:28.961131   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:08:28.974904   12861 logs.go:276] 0 containers: []
	W0318 04:08:28.974918   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:08:28.974984   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:08:28.990294   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:08:28.990314   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:08:28.990320   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:08:29.017236   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:08:29.017248   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:08:29.034687   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:08:29.034702   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:08:29.048588   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:08:29.048603   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:08:29.060722   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:08:29.060735   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:08:29.072750   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:08:29.072764   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:08:29.112578   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:08:29.112589   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:08:29.117804   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:08:29.117819   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:08:29.137548   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:08:29.137563   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:08:29.150443   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:08:29.150457   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:08:29.166217   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:08:29.166225   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:08:29.178569   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:08:29.178579   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:08:29.191641   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:08:29.191653   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:08:29.215444   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:08:29.215461   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:08:29.251935   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:08:29.251947   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:08:29.272705   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:08:29.272720   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:08:29.285407   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:08:29.285419   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:08:29.183124   12818 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 04:08:29.183136   12818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 04:08:29.183151   12818 sshutil.go:53] new ssh client: &{IP:localhost Port:52269 SSHKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/running-upgrade-727000/id_rsa Username:docker}
	I0318 04:08:29.184211   12818 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 04:08:29.184218   12818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 04:08:29.184223   12818 sshutil.go:53] new ssh client: &{IP:localhost Port:52269 SSHKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/running-upgrade-727000/id_rsa Username:docker}
	I0318 04:08:29.267317   12818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 04:08:29.273351   12818 api_server.go:52] waiting for apiserver process to appear ...
	I0318 04:08:29.273413   12818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:08:29.277773   12818 api_server.go:72] duration metric: took 115.708125ms to wait for apiserver process to appear ...
	I0318 04:08:29.277788   12818 api_server.go:88] waiting for apiserver healthz status ...
	I0318 04:08:29.277796   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:29.289074   12818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 04:08:29.289089   12818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
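
Addon installation here is a fixed two-step pattern: the manifest bytes are copied into the guest over SSH (logged as "scp memory --> ..."), then applied with the kubectl binary pinned to the cluster's Kubernetes version. Replayed by hand inside the node, the apply step is exactly the command logged above:

    # inside the guest: apply an addon manifest with the version-matched kubectl
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
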
	I0318 04:08:31.806858   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:34.278800   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:34.278851   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:36.809234   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
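
Both processes (12818 and 12861) are now stuck in the same loop: poll https://10.0.2.15:8443/healthz, hit the client timeout, gather logs, retry. The probe can be reproduced by hand; this curl invocation is an assumption for illustration (the URL comes from the log, and -k is needed because the endpoint serves a certificate signed by the cluster's private CA):

    # a healthy apiserver answers "ok"; here every attempt times out instead
    curl -k --max-time 5 https://10.0.2.15:8443/healthz
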
	I0318 04:08:36.809624   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:08:36.847722   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:08:36.847867   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:08:36.869276   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:08:36.869381   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:08:36.884438   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:08:36.884521   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:08:36.897404   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:08:36.897475   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:08:36.908481   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:08:36.908550   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:08:36.919347   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:08:36.919413   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:08:36.929754   12861 logs.go:276] 0 containers: []
	W0318 04:08:36.929772   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:08:36.929830   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:08:36.940790   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
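
Each enumeration pass runs one docker ps name filter per control-plane component. The kindnet filter matching zero containers is expected rather than an error: this cluster uses the bridge CNI (configured later in this log), so no kindnet pod exists and the query comes back empty:

    # returns no IDs on a bridge-CNI cluster, hence the "No container was found" warning
    docker ps -a --filter=name=k8s_kindnet --format '{{.ID}}'
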
	I0318 04:08:36.940808   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:08:36.940814   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:08:36.954990   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:08:36.955001   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:08:36.981586   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:08:36.981597   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:08:36.999115   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:08:36.999126   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:08:37.010614   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:08:37.010627   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:08:37.022362   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:08:37.022374   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:08:37.058010   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:08:37.058022   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:08:37.073309   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:08:37.073319   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:08:37.089595   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:08:37.089606   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:08:37.094133   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:08:37.094142   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:08:37.117108   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:08:37.117118   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:08:37.140604   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:08:37.140612   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:08:37.155317   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:08:37.155332   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:08:37.167895   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:08:37.167907   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:08:37.180509   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:08:37.180521   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:08:37.193517   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:08:37.193529   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:08:37.205317   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:08:37.205329   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:08:39.744558   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:39.279646   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:39.279671   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:44.746682   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:44.746792   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:08:44.760740   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:08:44.760815   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:08:44.772482   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:08:44.772555   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:08:44.782740   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:08:44.782810   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:08:44.793145   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:08:44.793220   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:08:44.803760   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:08:44.803827   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:08:44.814525   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:08:44.814595   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:08:44.825270   12861 logs.go:276] 0 containers: []
	W0318 04:08:44.825283   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:08:44.825346   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:08:44.835786   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:08:44.835808   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:08:44.835813   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:08:44.847212   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:08:44.847227   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:08:44.864388   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:08:44.864400   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:08:44.875425   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:08:44.875435   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:08:44.898505   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:08:44.898518   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:08:44.902738   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:08:44.902750   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:08:44.915777   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:08:44.915789   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:08:44.928389   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:08:44.928399   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:08:44.939975   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:08:44.939984   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:08:44.954998   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:08:44.955009   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:08:44.992971   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:08:44.992983   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:08:45.007157   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:08:45.007166   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:08:45.021883   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:08:45.021896   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:08:45.033795   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:08:45.033807   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:08:45.045813   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:08:45.045827   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:08:45.083413   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:08:45.083432   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:08:45.111047   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:08:45.111058   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:08:44.279794   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:44.279834   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:47.627060   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:49.280097   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:49.280144   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:52.629221   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:52.629395   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:08:52.641210   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:08:52.641292   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:08:52.652073   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:08:52.652142   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:08:52.662760   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:08:52.662841   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:08:52.673501   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:08:52.673575   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:08:52.684786   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:08:52.684862   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:08:52.695769   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:08:52.695845   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:08:52.708632   12861 logs.go:276] 0 containers: []
	W0318 04:08:52.708645   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:08:52.708704   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:08:52.719202   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:08:52.719224   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:08:52.719230   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:08:52.733666   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:08:52.733679   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:08:52.745854   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:08:52.745870   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:08:52.761337   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:08:52.761350   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:08:52.774123   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:08:52.774135   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:08:52.789531   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:08:52.789541   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:08:52.805333   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:08:52.805345   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:08:52.817152   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:08:52.817163   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:08:52.854889   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:08:52.854898   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:08:52.890138   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:08:52.890155   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:08:52.905140   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:08:52.905154   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:08:52.923655   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:08:52.923667   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:08:52.944874   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:08:52.944882   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:08:52.960577   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:08:52.960592   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:08:52.978683   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:08:52.978699   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:08:52.982846   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:08:52.982852   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:08:53.013277   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:08:53.013288   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:08:55.530088   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:54.280498   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:54.280547   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:59.280991   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:59.281042   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0318 04:08:59.640067   12818 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0318 04:08:59.644321   12818 out.go:177] * Enabled addons: storage-provisioner
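
The two addons fare differently because only one needs a live apiserver: the default-storageclass callback must list StorageClasses to mark one as default, so it fails with the same dial timeout seen in the healthz probes, while storage-provisioner is still reported enabled once its manifest apply has been issued. The request the failing callback depends on is, in effect:

    # the List call that default-storageclass enablement needs; it times out here
    curl -k --max-time 5 https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses
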
	I0318 04:09:00.531440   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:00.531531   12861 kubeadm.go:591] duration metric: took 4m3.750867375s to restartPrimaryControlPlane
	W0318 04:09:00.531600   12861 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 04:09:00.531633   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0318 04:09:01.575372   12861 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.043709417s)
	I0318 04:09:01.575453   12861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 04:09:01.580409   12861 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 04:09:01.583106   12861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 04:09:01.585641   12861 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 04:09:01.585647   12861 kubeadm.go:156] found existing configuration files:
	
	I0318 04:09:01.585672   12861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/admin.conf
	I0318 04:09:01.588599   12861 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 04:09:01.588623   12861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 04:09:01.591494   12861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/kubelet.conf
	I0318 04:09:01.594026   12861 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 04:09:01.594046   12861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 04:09:01.597139   12861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/controller-manager.conf
	I0318 04:09:01.600192   12861 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 04:09:01.600216   12861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 04:09:01.602667   12861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/scheduler.conf
	I0318 04:09:01.605386   12861 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 04:09:01.605408   12861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
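
Because the kubeadm reset above removed /etc/kubernetes/*.conf, every grep for the control-plane endpoint exits with status 2 ("No such file or directory"), and minikube falls through to rm -f, which is harmless on an already-missing file. One round of that stale-config check, spelled out as a sketch (the endpoint URL and paths are the ones logged above):

    # keep the kubeconfig only if it already points at this cluster's endpoint
    if ! sudo grep -q https://control-plane.minikube.internal:52368 /etc/kubernetes/admin.conf; then
      sudo rm -f /etc/kubernetes/admin.conf
    fi
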
	I0318 04:09:01.608434   12861 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 04:09:01.626784   12861 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0318 04:09:01.626856   12861 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 04:09:01.676927   12861 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 04:09:01.677061   12861 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 04:09:01.677109   12861 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0318 04:09:01.725324   12861 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 04:09:01.729581   12861 out.go:204]   - Generating certificates and keys ...
	I0318 04:09:01.729619   12861 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 04:09:01.729650   12861 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 04:09:01.729687   12861 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 04:09:01.729720   12861 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 04:09:01.729759   12861 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 04:09:01.729786   12861 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 04:09:01.729817   12861 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 04:09:01.729854   12861 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 04:09:01.729899   12861 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 04:09:01.729937   12861 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 04:09:01.729957   12861 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 04:09:01.729981   12861 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 04:09:01.814116   12861 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 04:09:01.957309   12861 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 04:09:01.993746   12861 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 04:09:02.156018   12861 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 04:09:02.188633   12861 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 04:09:02.189086   12861 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 04:09:02.189119   12861 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 04:09:02.271065   12861 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 04:08:59.652225   12818 addons.go:505] duration metric: took 30.4909315s for enable addons: enabled=[storage-provisioner]
	I0318 04:09:02.279240   12861 out.go:204]   - Booting up control plane ...
	I0318 04:09:02.279293   12861 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 04:09:02.279351   12861 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 04:09:02.279468   12861 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 04:09:02.279568   12861 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 04:09:02.279665   12861 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 04:09:06.776469   12861 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501020 seconds
	I0318 04:09:06.776593   12861 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 04:09:06.780246   12861 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 04:09:07.287395   12861 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 04:09:07.287500   12861 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-484000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 04:09:07.792058   12861 kubeadm.go:309] [bootstrap-token] Using token: nbki6v.fi2n2wh3y67bjfsb
	I0318 04:09:04.281790   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:04.281811   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:07.797951   12861 out.go:204]   - Configuring RBAC rules ...
	I0318 04:09:07.798013   12861 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 04:09:07.798062   12861 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 04:09:07.801554   12861 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 04:09:07.802504   12861 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 04:09:07.803363   12861 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 04:09:07.804196   12861 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 04:09:07.808543   12861 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 04:09:07.954953   12861 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 04:09:08.195595   12861 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 04:09:08.196032   12861 kubeadm.go:309] 
	I0318 04:09:08.196075   12861 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 04:09:08.196079   12861 kubeadm.go:309] 
	I0318 04:09:08.196120   12861 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 04:09:08.196127   12861 kubeadm.go:309] 
	I0318 04:09:08.196139   12861 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 04:09:08.196164   12861 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 04:09:08.196189   12861 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 04:09:08.196193   12861 kubeadm.go:309] 
	I0318 04:09:08.196223   12861 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 04:09:08.196227   12861 kubeadm.go:309] 
	I0318 04:09:08.196258   12861 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 04:09:08.196262   12861 kubeadm.go:309] 
	I0318 04:09:08.196289   12861 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 04:09:08.196325   12861 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 04:09:08.196379   12861 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 04:09:08.196382   12861 kubeadm.go:309] 
	I0318 04:09:08.196428   12861 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 04:09:08.196466   12861 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 04:09:08.196468   12861 kubeadm.go:309] 
	I0318 04:09:08.196510   12861 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token nbki6v.fi2n2wh3y67bjfsb \
	I0318 04:09:08.196571   12861 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8277259516d27b69cae5fa1851268f91d26ad55d9793249b2b30ee52eba5b372 \
	I0318 04:09:08.196583   12861 kubeadm.go:309] 	--control-plane 
	I0318 04:09:08.196586   12861 kubeadm.go:309] 
	I0318 04:09:08.196647   12861 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 04:09:08.196651   12861 kubeadm.go:309] 
	I0318 04:09:08.196701   12861 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token nbki6v.fi2n2wh3y67bjfsb \
	I0318 04:09:08.196758   12861 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8277259516d27b69cae5fa1851268f91d26ad55d9793249b2b30ee52eba5b372 
	I0318 04:09:08.196817   12861 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
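
This kubeadm init run succeeds, and its only warning is that the kubelet systemd unit is not enabled for boot; the fix is the one-liner the warning itself suggests:

    # make kubelet start automatically on boot, per the kubeadm warning above
    sudo systemctl enable kubelet.service
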
	I0318 04:09:08.196823   12861 cni.go:84] Creating CNI manager for ""
	I0318 04:09:08.196832   12861 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:09:08.200945   12861 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 04:09:08.207956   12861 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 04:09:08.210878   12861 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
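
minikube writes a 457-byte conflist for the bridge plugin into /etc/cni/net.d. The log does not show the file's contents; the sketch below is a representative bridge conflist of the same general shape, with every field value an illustrative assumption rather than the bytes minikube actually wrote:

    # sketch only: a typical bridge CNI conflist; minikube's real 1-k8s.conflist may differ
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
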
	I0318 04:09:08.215602   12861 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 04:09:08.215644   12861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 04:09:08.215676   12861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-484000 minikube.k8s.io/updated_at=2024_03_18T04_09_08_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd minikube.k8s.io/name=stopped-upgrade-484000 minikube.k8s.io/primary=true
	I0318 04:09:08.262702   12861 kubeadm.go:1107] duration metric: took 47.093083ms to wait for elevateKubeSystemPrivileges
	I0318 04:09:08.262722   12861 ops.go:34] apiserver oom_adj: -16
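
With networking in place, minikube grants kube-system's default service account cluster-admin (the minikube-rbac binding above), labels the node, and records the apiserver's oom_adj of -16, which biases the kernel's OOM killer away from the apiserver process. The score is read exactly as the log shows:

    # -16 means the kernel is biased against OOM-killing the apiserver
    cat /proc/$(pgrep kube-apiserver)/oom_adj
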
	W0318 04:09:08.262737   12861 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 04:09:08.262742   12861 kubeadm.go:393] duration metric: took 4m11.495364792s to StartCluster
	I0318 04:09:08.262752   12861 settings.go:142] acquiring lock: {Name:mk6b2370db0877d87697e89b229a47432418434b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:09:08.262835   12861 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:09:08.263228   12861 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/kubeconfig: {Name:mk14a68d5fe20b060d9b3cac470478a530bf535b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:09:08.263686   12861 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:09:08.267742   12861 out.go:177] * Verifying Kubernetes components...
	I0318 04:09:08.263762   12861 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:09:08.263734   12861 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 04:09:08.267754   12861 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-484000"
	I0318 04:09:08.267769   12861 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-484000"
	W0318 04:09:08.267774   12861 addons.go:243] addon storage-provisioner should already be in state true
	I0318 04:09:08.267794   12861 host.go:66] Checking if "stopped-upgrade-484000" exists ...
	I0318 04:09:08.267814   12861 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-484000"
	I0318 04:09:08.275894   12861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:09:08.275938   12861 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-484000"
	I0318 04:09:08.279897   12861 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:09:08.281138   12861 kapi.go:59] client config for stopped-upgrade-484000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/client.key", CAFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10256ea80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 04:09:08.283938   12861 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 04:09:08.283943   12861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 04:09:08.283950   12861 sshutil.go:53] new ssh client: &{IP:localhost Port:52306 SSHKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	I0318 04:09:08.284001   12861 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-484000"
	W0318 04:09:08.284008   12861 addons.go:243] addon default-storageclass should already be in state true
	I0318 04:09:08.284018   12861 host.go:66] Checking if "stopped-upgrade-484000" exists ...
	I0318 04:09:08.284751   12861 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 04:09:08.284756   12861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 04:09:08.284760   12861 sshutil.go:53] new ssh client: &{IP:localhost Port:52306 SSHKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	I0318 04:09:08.364813   12861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 04:09:08.370749   12861 api_server.go:52] waiting for apiserver process to appear ...
	I0318 04:09:08.370794   12861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:09:08.374585   12861 api_server.go:72] duration metric: took 110.889666ms to wait for apiserver process to appear ...
	I0318 04:09:08.374593   12861 api_server.go:88] waiting for apiserver healthz status ...
	I0318 04:09:08.374600   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:08.438452   12861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 04:09:08.440433   12861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 04:09:09.282585   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:09.282628   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:13.376626   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:13.376675   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:14.282844   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:14.282889   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:18.376948   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:18.377002   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:19.284105   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:19.284132   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:23.377273   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:23.377331   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:24.285594   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:24.285633   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:28.377747   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:28.377770   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:29.287525   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:29.287626   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:09:29.300693   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:09:29.300768   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:09:29.311170   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:09:29.311243   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:09:29.322097   12818 logs.go:276] 2 containers: [c4569429f40f 9e480ed55df6]
	I0318 04:09:29.322175   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:09:29.332154   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:09:29.332222   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:09:29.342859   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:09:29.342937   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:09:29.353163   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:09:29.353238   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:09:29.363429   12818 logs.go:276] 0 containers: []
	W0318 04:09:29.363441   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:09:29.363501   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:09:29.374171   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:09:29.374186   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:09:29.374192   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:09:29.378888   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:09:29.378894   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:09:29.394064   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:09:29.394076   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:09:29.405929   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:09:29.405944   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:09:29.420963   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:09:29.420974   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:09:29.443806   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:09:29.443813   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:09:29.455125   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:09:29.455138   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:09:29.467034   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:09:29.467048   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:09:29.503358   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:09:29.503367   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:09:29.538111   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:09:29.538125   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:09:29.555078   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:09:29.555088   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:09:29.566773   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:09:29.566787   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:09:29.578360   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:09:29.578372   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:09:32.100067   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:33.378322   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:33.378381   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:37.102278   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:37.102429   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:09:37.118540   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:09:37.118632   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:09:37.138002   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:09:37.138071   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:09:37.148577   12818 logs.go:276] 2 containers: [c4569429f40f 9e480ed55df6]
	I0318 04:09:37.148651   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:09:37.160339   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:09:37.160416   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:09:37.170523   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:09:37.170597   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:09:37.181619   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:09:37.181687   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:09:37.192507   12818 logs.go:276] 0 containers: []
	W0318 04:09:37.192521   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:09:37.192583   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:09:37.203684   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:09:37.203703   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:09:37.203710   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:09:37.215557   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:09:37.215567   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:09:37.240635   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:09:37.240645   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:09:37.275542   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:09:37.275555   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:09:37.290187   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:09:37.290198   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:09:37.307909   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:09:37.307920   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:09:37.321849   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:09:37.321861   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:09:37.340638   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:09:37.340650   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:09:37.352547   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:09:37.352558   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:09:37.367438   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:09:37.367450   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:09:37.378532   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:09:37.378546   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:09:37.415360   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:09:37.415368   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:09:37.419705   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:09:37.419715   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:09:38.379044   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:38.379075   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0318 04:09:38.827659   12861 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0318 04:09:38.832003   12861 out.go:177] * Enabled addons: storage-provisioner
	I0318 04:09:38.841031   12861 addons.go:505] duration metric: took 30.5780685s for enable addons: enabled=[storage-provisioner]
	I0318 04:09:39.936266   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:43.379933   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:43.380016   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:44.938555   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:44.938852   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:09:44.979301   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:09:44.979435   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:09:45.008250   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:09:45.008327   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:09:45.022273   12818 logs.go:276] 2 containers: [c4569429f40f 9e480ed55df6]
	I0318 04:09:45.022345   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:09:45.033760   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:09:45.033829   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:09:45.045503   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:09:45.045580   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:09:45.056466   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:09:45.056531   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:09:45.067021   12818 logs.go:276] 0 containers: []
	W0318 04:09:45.067036   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:09:45.067095   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:09:45.078722   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:09:45.078739   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:09:45.078744   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:09:45.114603   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:09:45.114614   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:09:45.128719   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:09:45.128730   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:09:45.140118   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:09:45.140129   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:09:45.151819   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:09:45.151830   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:09:45.169891   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:09:45.169902   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:09:45.182107   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:09:45.182119   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:09:45.221571   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:09:45.221582   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:09:45.226692   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:09:45.226699   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:09:45.242631   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:09:45.242642   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:09:45.253659   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:09:45.253670   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:09:45.268968   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:09:45.268978   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:09:45.292146   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:09:45.292155   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
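
Each gathering cycle above follows the same two-step pattern: discover container IDs with a docker ps name filter (logs.go:276), then tail the last 400 lines of each container (logs.go:123). The sketch below reproduces that pattern locally with os/exec; the helper name is hypothetical, and it omits the SSH hop that ssh_runner.go performs against the VM. Running docker over SSH only changes the transport; the ps/logs command pair is the same.

// Sketch of the discover-then-tail pattern from the log, run locally.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailComponent finds containers named k8s_<name> and tails their logs,
// using the same filter and tail depth as the harness.
func tailComponent(name string) error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	fmt.Printf("%d containers: %v\n", len(ids), ids)
	for _, id := range ids {
		logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("=== %s [%s] ===\n%s", name, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		if err := tailComponent(c); err != nil {
			fmt.Println(c, "error:", err)
		}
	}
}
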
	I0318 04:09:47.806020   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:48.381147   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:48.381175   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:52.808242   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:52.808453   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:09:52.825177   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:09:52.825255   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:09:52.837705   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:09:52.837776   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:09:52.849230   12818 logs.go:276] 2 containers: [c4569429f40f 9e480ed55df6]
	I0318 04:09:52.849305   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:09:52.859813   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:09:52.859882   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:09:52.870576   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:09:52.870639   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:09:52.895048   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:09:52.895127   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:09:52.907251   12818 logs.go:276] 0 containers: []
	W0318 04:09:52.907266   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:09:52.907331   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:09:52.918300   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:09:52.918316   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:09:52.918320   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:09:52.953875   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:09:52.953886   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:09:52.958787   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:09:52.958794   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:09:52.997850   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:09:52.997862   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:09:53.016479   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:09:53.016490   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:09:53.029281   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:09:53.029293   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:09:53.049580   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:09:53.049591   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:09:53.061240   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:09:53.061252   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:09:53.086026   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:09:53.086040   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:09:53.099829   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:09:53.099840   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:09:53.114119   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:09:53.114130   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:09:53.125920   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:09:53.125931   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:09:53.140591   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:09:53.140601   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:09:53.382618   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:53.382648   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:55.654662   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:58.384696   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:58.384725   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:00.655404   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:00.655679   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:00.682674   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:10:00.682789   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:00.698183   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:10:00.698265   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:00.711530   12818 logs.go:276] 2 containers: [c4569429f40f 9e480ed55df6]
	I0318 04:10:00.711617   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:00.722548   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:10:00.722625   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:00.733400   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:10:00.733477   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:00.743932   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:10:00.743993   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:00.754433   12818 logs.go:276] 0 containers: []
	W0318 04:10:00.754451   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:00.754510   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:00.765327   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:10:00.765342   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:10:00.765347   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:10:00.777432   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:10:00.777444   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:10:00.794525   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:10:00.794535   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:10:00.811900   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:00.811911   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:00.835975   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:00.835986   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:00.872857   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:00.872867   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:00.877382   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:10:00.877389   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:10:00.891731   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:10:00.891741   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:10:00.903534   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:10:00.903545   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:10:00.915243   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:10:00.915254   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:10:00.926689   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:10:00.926701   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:00.938380   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:00.938391   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:00.975711   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:10:00.975723   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:10:03.386797   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:03.386847   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:03.496353   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:08.383325   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:08.383437   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:08.394902   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:10:08.394976   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:08.405838   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:10:08.405917   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:08.418165   12861 logs.go:276] 2 containers: [6d6ceecff203 98858ec38534]
	I0318 04:10:08.418238   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:08.428989   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:10:08.429065   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:08.439589   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:10:08.439663   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:08.450799   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:10:08.450877   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:08.461415   12861 logs.go:276] 0 containers: []
	W0318 04:10:08.461426   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:08.461507   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:08.471997   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:10:08.472014   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:10:08.472021   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:10:08.490186   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:10:08.490196   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:10:08.502194   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:08.502208   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:08.539344   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:08.539368   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:08.544581   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:08.544592   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:08.584004   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:10:08.584018   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:10:08.599045   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:10:08.599061   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:10:08.611987   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:10:08.612000   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:08.626206   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:10:08.626219   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:10:08.651549   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:10:08.651562   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:10:08.664508   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:10:08.664521   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:10:08.680502   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:10:08.680515   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:10:08.694911   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:08.694922   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:11.217325   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:08.492493   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:08.492576   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:08.503954   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:10:08.504024   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:08.516083   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:10:08.516158   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:08.526790   12818 logs.go:276] 2 containers: [c4569429f40f 9e480ed55df6]
	I0318 04:10:08.526861   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:08.542478   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:10:08.542550   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:08.554626   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:10:08.554705   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:08.566890   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:10:08.566961   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:08.578064   12818 logs.go:276] 0 containers: []
	W0318 04:10:08.578076   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:08.578136   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:08.589448   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:10:08.589464   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:10:08.589470   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:10:08.601755   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:08.601766   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:08.640289   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:08.640308   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:08.681650   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:10:08.681658   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:10:08.696758   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:10:08.696768   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:10:08.711369   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:10:08.711381   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:10:08.727688   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:08.727699   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:08.751135   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:10:08.751151   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:08.762652   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:08.762663   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:08.767449   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:10:08.767458   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:10:08.786328   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:10:08.786339   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:10:08.801406   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:10:08.801417   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:10:08.813372   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:10:08.813383   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:10:11.328663   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:16.211836   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:16.212055   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:16.228540   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:10:16.228621   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:16.240157   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:10:16.240230   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:16.251040   12861 logs.go:276] 2 containers: [6d6ceecff203 98858ec38534]
	I0318 04:10:16.251105   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:16.261348   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:10:16.261411   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:16.271555   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:10:16.271628   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:16.281587   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:10:16.281651   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:16.291812   12861 logs.go:276] 0 containers: []
	W0318 04:10:16.291824   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:16.291886   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:16.307461   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:10:16.307479   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:10:16.307484   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:10:16.318556   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:10:16.318569   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:10:16.340813   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:10:16.340825   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:10:16.353170   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:16.353183   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:16.378316   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:16.378336   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:16.414218   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:16.414236   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:16.419201   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:16.419211   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:16.457549   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:10:16.457562   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:10:16.472202   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:10:16.472213   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:16.485509   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:10:16.485521   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:10:16.502163   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:10:16.502174   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:10:16.515352   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:10:16.515361   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:10:16.323169   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:16.323311   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:16.335020   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:10:16.335092   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:16.346979   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:10:16.347047   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:16.359850   12818 logs.go:276] 2 containers: [c4569429f40f 9e480ed55df6]
	I0318 04:10:16.359940   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:16.371211   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:10:16.371280   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:16.382630   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:10:16.382706   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:16.394419   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:10:16.394490   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:16.405281   12818 logs.go:276] 0 containers: []
	W0318 04:10:16.405292   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:16.405348   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:16.416658   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:10:16.416674   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:10:16.416681   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:10:16.433109   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:10:16.433125   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:10:16.447091   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:16.447105   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:16.472971   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:16.472980   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:16.513348   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:16.513370   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:16.518979   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:10:16.518992   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:10:16.535975   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:10:16.535986   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:10:16.548288   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:10:16.548299   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:10:16.560331   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:10:16.560342   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:16.573076   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:16.573088   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:16.611920   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:10:16.611929   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:10:16.626758   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:10:16.626771   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:10:16.644979   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:10:16.644991   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:10:16.531310   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:10:16.531322   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:10:19.043659   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:19.156115   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:24.040662   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:24.040892   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:24.062636   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:10:24.062734   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:24.076610   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:10:24.076691   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:24.087829   12861 logs.go:276] 2 containers: [6d6ceecff203 98858ec38534]
	I0318 04:10:24.087897   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:24.098173   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:10:24.098241   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:24.108748   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:10:24.108819   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:24.119215   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:10:24.119289   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:24.129528   12861 logs.go:276] 0 containers: []
	W0318 04:10:24.129539   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:24.129599   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:24.140182   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:10:24.140198   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:24.140204   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:24.175356   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:10:24.175372   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:10:24.189941   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:10:24.189951   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:10:24.208991   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:10:24.209007   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:10:24.221653   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:10:24.221665   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:10:24.237768   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:10:24.237779   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:10:24.251824   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:10:24.251836   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:10:24.264123   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:24.264136   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:24.269148   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:24.269159   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:24.307294   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:10:24.307308   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:10:24.321953   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:10:24.321969   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:10:24.338008   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:24.338019   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:24.363792   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:10:24.363810   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:24.153086   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:24.153167   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:24.164593   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:10:24.164664   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:24.175832   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:10:24.175899   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:24.187896   12818 logs.go:276] 2 containers: [c4569429f40f 9e480ed55df6]
	I0318 04:10:24.187972   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:24.199906   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:10:24.199984   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:24.210869   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:10:24.210944   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:24.227375   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:10:24.227449   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:24.240836   12818 logs.go:276] 0 containers: []
	W0318 04:10:24.240847   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:24.240910   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:24.257689   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:10:24.257706   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:10:24.257712   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:10:24.270495   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:10:24.270503   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:10:24.284569   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:10:24.284584   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:10:24.296741   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:24.296753   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:24.301955   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:24.301967   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:24.339140   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:10:24.339150   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:10:24.354040   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:10:24.354052   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:10:24.368705   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:10:24.368717   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:10:24.383461   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:10:24.383472   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:24.395148   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:24.395158   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:24.431194   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:10:24.431201   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:10:24.445682   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:10:24.445692   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:10:24.464106   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:24.464119   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:26.989566   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:26.876806   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:31.987022   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:31.987108   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:31.998709   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:10:31.998783   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:32.011462   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:10:32.011532   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:32.022866   12818 logs.go:276] 2 containers: [c4569429f40f 9e480ed55df6]
	I0318 04:10:32.022946   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:32.035089   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:10:32.035165   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:32.047058   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:10:32.047133   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:32.058619   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:10:32.058685   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:32.070008   12818 logs.go:276] 0 containers: []
	W0318 04:10:32.070020   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:32.070087   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:32.081798   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:10:32.081814   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:10:32.081820   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:10:32.094509   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:10:32.094518   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:32.107302   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:32.107315   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:32.146457   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:32.146472   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:32.151561   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:10:32.151572   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:10:32.167178   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:10:32.167197   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:10:32.180404   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:10:32.180417   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:10:32.193335   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:10:32.193348   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:10:32.216431   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:32.216441   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:32.241904   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:32.241912   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:32.277046   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:10:32.277061   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:10:32.292083   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:10:32.292095   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:10:32.303986   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:10:32.303996   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:10:31.876175   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:31.876329   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:31.892212   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:10:31.892296   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:31.904475   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:10:31.904552   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:31.915701   12861 logs.go:276] 2 containers: [6d6ceecff203 98858ec38534]
	I0318 04:10:31.915773   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:31.925973   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:10:31.926037   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:31.936314   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:10:31.936380   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:31.947134   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:10:31.947202   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:31.957206   12861 logs.go:276] 0 containers: []
	W0318 04:10:31.957219   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:31.957274   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:31.967794   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:10:31.967811   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:10:31.967816   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:10:31.978945   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:31.978956   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:32.013871   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:32.013880   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:32.019579   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:32.019591   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:32.058494   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:10:32.058507   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:10:32.078025   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:10:32.078036   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:10:32.093623   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:32.093641   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:32.119013   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:10:32.119025   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:32.130934   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:10:32.130949   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:10:32.144180   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:10:32.144192   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:10:32.156796   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:10:32.156809   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:10:32.172696   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:10:32.172707   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:10:32.187185   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:10:32.187205   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:10:34.706686   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:34.823347   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:39.707122   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:39.707240   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:39.718813   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:10:39.718890   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:39.729474   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:10:39.729539   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:39.739888   12861 logs.go:276] 2 containers: [6d6ceecff203 98858ec38534]
	I0318 04:10:39.739954   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:39.750293   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:10:39.750360   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:39.760918   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:10:39.760988   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:39.771375   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:10:39.771445   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:39.781411   12861 logs.go:276] 0 containers: []
	W0318 04:10:39.781424   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:39.781482   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:39.791834   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:10:39.791850   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:10:39.791855   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:10:39.803390   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:10:39.803401   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:10:39.820467   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:39.820480   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:39.857383   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:39.857401   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:39.896025   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:10:39.896036   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:10:39.913451   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:10:39.913463   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:10:39.926047   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:10:39.926058   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:10:39.938019   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:10:39.938031   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:10:39.953007   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:39.953019   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:39.977677   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:10:39.977692   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:39.990559   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:39.990570   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:39.995334   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:10:39.995346   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:10:40.013251   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:10:40.013263   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:10:39.823710   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:39.823787   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:39.835434   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:10:39.835507   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:39.846491   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:10:39.846558   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:39.857662   12818 logs.go:276] 2 containers: [c4569429f40f 9e480ed55df6]
	I0318 04:10:39.857728   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:39.869426   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:10:39.869502   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:39.880601   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:10:39.880681   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:39.892789   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:10:39.892863   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:39.904615   12818 logs.go:276] 0 containers: []
	W0318 04:10:39.904630   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:39.904694   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:39.915660   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:10:39.915676   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:39.915682   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:39.954622   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:39.954633   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:39.991618   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:10:39.991625   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:10:40.007643   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:10:40.007660   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:10:40.020086   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:10:40.020098   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:10:40.033175   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:10:40.033185   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:10:40.051306   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:10:40.051316   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:40.062715   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:40.062725   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:40.067119   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:10:40.067128   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:10:40.081240   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:10:40.081251   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:10:40.093238   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:10:40.093250   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:10:40.107586   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:10:40.107597   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:10:40.119524   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:40.119533   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
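Each "N containers: [...]" line is the parsed output of the docker ps enumeration directly above it: all containers, running or exited (-a), whose names carry the k8s_<component> prefix that dockershim/cri-dockerd applies to pod containers. A local sketch of that step; minikube runs the same command through its SSH runner inside the guest:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers reproduces the enumeration step: every container
    // (including exited ones, hence -a) whose name matches k8s_<component>,
    // one ID per line.
    func listContainers(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
    		ids, err := listContainers(c)
    		if err != nil {
    			fmt.Println(err)
    			return
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    	}
    }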
	I0318 04:10:42.644836   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:42.527351   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:47.645890   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:47.645973   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:47.658157   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:10:47.658226   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:47.669965   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:10:47.670035   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:47.681768   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:10:47.681840   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:47.693221   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:10:47.693286   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:47.705550   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:10:47.705625   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:47.717282   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:10:47.717347   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:47.728203   12818 logs.go:276] 0 containers: []
	W0318 04:10:47.728220   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:47.728282   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:47.740259   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:10:47.740277   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:47.740282   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:47.781385   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:47.781398   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:47.786791   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:10:47.786803   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:10:47.799028   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:10:47.799039   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:10:47.811659   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:10:47.811670   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:47.823940   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:47.823952   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:47.864178   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:10:47.864192   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:10:47.879055   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:10:47.879070   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:10:47.891054   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:10:47.891068   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:10:47.902809   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:10:47.902819   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:10:47.914630   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:47.914640   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:47.939396   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:10:47.939403   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:10:47.953610   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:10:47.953623   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:10:47.968887   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:10:47.968902   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:10:47.990659   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:10:47.990673   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
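Between its first and second sweep, process 12818's coredns count jumps from 2 to 4 containers ([c4569429f40f 9e480ed55df6] growing to [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]). Because the enumeration uses -a, every restart leaves the exited container behind as an extra ID, so a rising count here usually points at a crash-looping pod rather than a scale-up. Restricting the view to live containers is one filter away:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // runningCoredns lists only the coredns containers currently running,
    // dropping the exited ones that `docker ps -a` keeps reporting.
    func runningCoredns() ([]string, error) {
    	out, err := exec.Command("docker", "ps",
    		"--filter", "name=k8s_coredns",
    		"--filter", "status=running",
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := runningCoredns()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("%d running: %v\n", len(ids), ids)
    }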
	I0318 04:10:47.528541   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:47.528722   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:47.541461   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:10:47.541545   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:47.557156   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:10:47.557223   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:47.567613   12861 logs.go:276] 2 containers: [6d6ceecff203 98858ec38534]
	I0318 04:10:47.567683   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:47.578972   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:10:47.579041   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:47.589590   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:10:47.589663   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:47.600107   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:10:47.600177   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:47.615946   12861 logs.go:276] 0 containers: []
	W0318 04:10:47.615958   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:47.616018   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:47.630103   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:10:47.630118   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:47.630123   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:47.635322   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:47.635331   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:47.673013   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:10:47.673024   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:10:47.688414   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:10:47.688427   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:10:47.700998   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:10:47.701010   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:10:47.716305   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:10:47.716323   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:10:47.728926   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:10:47.728935   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:47.740954   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:47.740963   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:47.777005   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:10:47.777017   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:10:47.796753   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:10:47.796770   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:10:47.809320   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:10:47.809334   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:10:47.828074   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:10:47.828086   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:10:47.846353   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:47.846366   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
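The timestamps give away the loop's cadence for process 12861: the probe started at 04:10:42.527 fails at 04:10:47.528 (the ~5s client timeout), the log sweep runs until 04:10:47.846, and the next probe begins at 04:10:50.375, roughly 2.5 seconds later. A plausible reconstruction of that outer loop; probe and sweep are hypothetical stand-ins for the steps sketched earlier, and both wait durations are read off the log rather than taken from minikube's source:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // probe and sweep stand in for the healthz GET and the
    // docker ps / docker logs / journalctl sweep shown above.
    func probe() error { return errors.New("context deadline exceeded") }
    func sweep()       { /* gather container and unit logs */ }

    // waitForApiserver retries until the probe succeeds or the overall
    // deadline passes: ~5s inside the failing probe, a sub-second sweep,
    // then an inferred ~2.5s pause before the next attempt.
    func waitForApiserver(deadline time.Time) error {
    	for time.Now().Before(deadline) {
    		if err := probe(); err == nil {
    			return nil
    		}
    		sweep()
    		time.Sleep(2500 * time.Millisecond)
    	}
    	return errors.New("apiserver never became healthy")
    }

    func main() {
    	fmt.Println(waitForApiserver(time.Now().Add(10 * time.Second)))
    }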
	I0318 04:10:50.375000   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:50.504349   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:55.376520   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:55.376702   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:55.388134   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:10:55.388218   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:55.398985   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:10:55.399052   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:55.410427   12861 logs.go:276] 2 containers: [6d6ceecff203 98858ec38534]
	I0318 04:10:55.410495   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:55.420964   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:10:55.421026   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:55.433183   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:10:55.433257   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:55.446015   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:10:55.446087   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:55.456579   12861 logs.go:276] 0 containers: []
	W0318 04:10:55.456591   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:55.456652   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:55.470019   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:10:55.470034   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:10:55.470040   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:10:55.481478   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:10:55.481489   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:10:55.498699   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:55.498710   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:55.534919   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:55.534936   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:55.539995   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:10:55.540005   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:10:55.560249   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:10:55.560261   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:10:55.574042   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:10:55.574052   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:10:55.592501   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:10:55.592514   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:10:55.610280   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:55.610296   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:55.635087   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:10:55.635105   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:55.647959   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:55.647973   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:55.686487   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:10:55.686499   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:10:55.702446   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:10:55.702460   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:10:55.503906   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:55.503988   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:55.515496   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:10:55.515568   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:55.526843   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:10:55.526910   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:55.538038   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:10:55.538118   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:55.550150   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:10:55.550230   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:55.561640   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:10:55.561711   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:55.573292   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:10:55.573369   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:55.585483   12818 logs.go:276] 0 containers: []
	W0318 04:10:55.585516   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:55.585590   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:55.597613   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:10:55.597629   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:55.597635   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:55.602954   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:10:55.602965   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:10:55.615703   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:10:55.615713   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:10:55.642509   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:10:55.642527   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:10:55.656413   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:10:55.656425   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:10:55.668984   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:55.668995   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:55.696834   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:10:55.696853   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:55.714030   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:55.714042   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:55.751630   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:10:55.751653   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:10:55.763247   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:10:55.763259   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:10:55.775095   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:10:55.775106   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:10:55.789175   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:10:55.789187   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:10:55.802879   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:10:55.802891   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:10:55.820501   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:55.820513   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:55.855071   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:10:55.855085   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
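The "container status" step packs a double fallback into one shell line: `which crictl || echo crictl` substitutes the crictl path when one exists (or the bare word crictl, which then fails to run), and the trailing `|| sudo docker ps -a` catches either failure. The same preference order in Go, with sudo dropped for a local illustration:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus mirrors the shell fallback
    //   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    // prefer crictl when it is on PATH, otherwise fall back to docker.
    func containerStatus() ([]byte, error) {
    	if path, err := exec.LookPath("crictl"); err == nil {
    		if out, err := exec.Command(path, "ps", "-a").Output(); err == nil {
    			return out, nil
    		}
    	}
    	return exec.Command("docker", "ps", "-a").Output()
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Print(string(out))
    }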
	I0318 04:10:58.216993   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:58.368919   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:03.218703   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:03.219007   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:03.236861   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:11:03.236954   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:03.250307   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:11:03.250381   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:03.261880   12861 logs.go:276] 2 containers: [6d6ceecff203 98858ec38534]
	I0318 04:11:03.261953   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:03.272199   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:11:03.272268   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:03.282691   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:11:03.282762   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:03.300788   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:11:03.300860   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:03.311582   12861 logs.go:276] 0 containers: []
	W0318 04:11:03.311600   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:03.311658   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:03.322460   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:11:03.322477   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:03.322482   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:03.359100   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:03.359112   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:03.396435   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:11:03.396446   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:11:03.412003   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:11:03.412018   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:11:03.427341   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:11:03.427355   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:11:03.440596   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:11:03.440609   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:11:03.453107   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:11:03.453122   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:11:03.466043   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:11:03.466054   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:11:03.479079   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:03.479092   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:03.504477   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:11:03.504491   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:03.520126   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:03.520139   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:03.525236   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:11:03.525244   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:11:03.541057   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:11:03.541069   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
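kubelet, docker, and cri-docker run as systemd units inside the minikube guest, which is why their logs come out of journalctl (-u selects units, -n 400 caps the tail), while the Kubernetes components, which run as containers, go through docker logs --tail 400 instead. The journal side as a local sketch, minus sudo:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // unitLogs tails the journal for one or more systemd units, mirroring
    // `journalctl -u docker -u cri-docker -n 400` from the sweep above.
    func unitLogs(n string, units ...string) ([]byte, error) {
    	args := []string{"-n", n}
    	for _, u := range units {
    		args = append(args, "-u", u)
    	}
    	return exec.Command("journalctl", args...).CombinedOutput()
    }

    func main() {
    	out, err := unitLogs("400", "docker", "cri-docker")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Print(string(out))
    }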
	I0318 04:11:06.061287   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:03.370592   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:03.370727   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:03.382292   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:11:03.382370   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:03.393731   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:11:03.393803   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:03.405086   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:11:03.405161   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:03.416759   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:11:03.416830   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:03.431008   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:11:03.431084   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:03.443120   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:11:03.443196   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:03.454356   12818 logs.go:276] 0 containers: []
	W0318 04:11:03.454366   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:03.454424   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:03.470256   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:11:03.470272   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:11:03.470277   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:11:03.485201   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:11:03.485213   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:11:03.497900   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:11:03.497918   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:03.510052   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:11:03.510064   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:11:03.523696   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:03.523707   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:03.549439   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:03.549456   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:03.589332   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:11:03.589343   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:11:03.603805   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:11:03.603818   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:11:03.615592   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:11:03.615604   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:11:03.631039   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:03.631050   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:03.635810   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:03.635817   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:03.670983   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:11:03.670992   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:11:03.683247   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:11:03.683258   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:11:03.701866   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:11:03.701877   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:11:03.719714   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:11:03.719724   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:11:06.233355   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:11.063242   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:11.063418   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:11.079638   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:11:11.079719   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:11.096462   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:11:11.096537   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:11.107857   12861 logs.go:276] 2 containers: [6d6ceecff203 98858ec38534]
	I0318 04:11:11.107932   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:11.121720   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:11:11.121799   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:11.132794   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:11:11.132862   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:11.148042   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:11:11.148113   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:11.158925   12861 logs.go:276] 0 containers: []
	W0318 04:11:11.158940   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:11.159006   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:11.169855   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:11:11.169875   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:11.169881   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:11.204023   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:11:11.204032   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:11:11.218124   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:11:11.218134   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:11:11.232416   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:11:11.232426   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:11:11.247855   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:11:11.247865   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:11:11.260312   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:11:11.260323   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:11:11.272686   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:11:11.272698   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:11.285897   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:11.285908   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:11.323684   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:11.323701   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:11.328441   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:11:11.328454   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:11:11.340952   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:11:11.340965   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:11:11.361050   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:11:11.361062   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:11:11.381060   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:11.381078   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:11.235171   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:11.235267   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:11.246699   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:11:11.246775   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:11.257772   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:11:11.257840   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:11.269675   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:11:11.269751   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:11.284928   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:11:11.284998   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:11.296609   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:11:11.296678   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:11.307149   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:11:11.307218   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:11.318254   12818 logs.go:276] 0 containers: []
	W0318 04:11:11.318266   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:11.318323   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:11.329345   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:11:11.329360   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:11:11.329364   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:11:11.345025   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:11.345037   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:11.371592   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:11.371606   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:11.410183   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:11:11.410194   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:11:11.428429   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:11:11.428441   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:11:11.442827   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:11:11.442838   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:11:11.454647   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:11.454657   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:11.492433   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:11:11.492442   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:11:11.509755   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:11:11.509765   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:11:11.525730   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:11:11.525740   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:11:11.543182   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:11.543193   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:11.547803   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:11:11.547812   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:11:11.561918   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:11:11.561930   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:11:11.573766   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:11:11.573775   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:11:11.585543   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:11:11.585556   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:13.908239   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:14.099348   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
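Two minikube invocations write into this capture, so their cycles interleave and the timestamps appear to jump backwards at every switch (the 12818 block opening at 04:10:39.823 directly after a 12861 line stamped 04:10:40.013 is one example). When following a single process, filtering on the PID, the third whitespace-separated field of each klog line, untangles the stream:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // Reads a combined klog capture on stdin and keeps only the lines whose
    // third field (the PID) matches os.Args[1],
    // e.g. `go run split.go 12818 < test.log`.
    func main() {
    	if len(os.Args) < 2 {
    		fmt.Fprintln(os.Stderr, "usage: split <pid>")
    		os.Exit(1)
    	}
    	pid := os.Args[1]
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
    	for sc.Scan() {
    		// klog layout: I0318 04:10:39.913463   12861 ssh_runner.go:195] ...
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 3 && fields[2] == pid {
    			fmt.Println(sc.Text())
    		}
    	}
    }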
	I0318 04:11:18.909726   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:18.909928   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:18.922590   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:11:18.922674   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:18.933351   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:11:18.933422   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:18.947466   12861 logs.go:276] 2 containers: [6d6ceecff203 98858ec38534]
	I0318 04:11:18.947538   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:18.957931   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:11:18.958001   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:18.968936   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:11:18.969004   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:18.979471   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:11:18.979550   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:18.989173   12861 logs.go:276] 0 containers: []
	W0318 04:11:18.989185   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:18.989240   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:18.999304   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:11:18.999317   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:11:18.999322   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:11:19.013163   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:11:19.013175   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:11:19.024739   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:19.024750   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:19.050065   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:11:19.050076   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:19.061975   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:19.061986   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:19.097215   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:19.097224   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:19.102175   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:11:19.102184   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:11:19.114840   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:11:19.114854   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:11:19.130888   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:11:19.130901   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:11:19.144011   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:11:19.144025   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:11:19.162627   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:11:19.162640   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:11:19.175695   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:19.175708   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:19.229549   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:11:19.229562   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:11:19.100337   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:19.100412   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:19.111808   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:11:19.111893   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:19.123613   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:11:19.123685   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:19.142013   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:11:19.142089   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:19.153610   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:11:19.153689   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:19.165328   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:11:19.165409   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:19.177427   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:11:19.177497   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:19.189960   12818 logs.go:276] 0 containers: []
	W0318 04:11:19.189981   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:19.190044   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:19.202780   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:11:19.202798   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:11:19.202804   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:11:19.219815   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:11:19.219828   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:11:19.233808   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:11:19.233820   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:11:19.253135   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:19.253148   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:19.293560   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:11:19.293573   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:11:19.306209   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:11:19.306223   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:11:19.318138   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:11:19.318149   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:11:19.330106   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:11:19.330116   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:11:19.341750   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:11:19.341760   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:19.354129   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:19.354143   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:19.359032   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:11:19.359042   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:11:19.373610   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:11:19.373620   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:11:19.385052   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:19.385061   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:19.409749   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:19.409756   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:19.448173   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:11:19.448222   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:11:21.966148   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:21.774911   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:26.968272   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:26.968350   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:26.983462   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:11:26.983521   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:26.995078   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:11:26.995139   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:27.006594   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:11:27.006651   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:27.017802   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:11:27.017859   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:27.031673   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:11:27.031732   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:27.042731   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:11:27.042790   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:27.054273   12818 logs.go:276] 0 containers: []
	W0318 04:11:27.054284   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:27.054340   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:27.072026   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:11:27.072042   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:11:27.072047   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:11:27.087151   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:11:27.087167   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:11:27.106573   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:11:27.106584   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:11:27.118820   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:27.118829   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:27.145864   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:27.145875   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:27.185495   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:11:27.185507   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:11:27.200009   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:11:27.200021   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:11:27.211949   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:11:27.211962   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:11:27.223914   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:27.223925   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:27.262090   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:11:27.262104   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:11:27.280245   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:11:27.280257   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:11:27.297451   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:11:27.297462   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:27.309560   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:27.309571   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:27.314538   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:11:27.314548   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:11:27.326379   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:11:27.326391   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:11:26.776919   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:26.777035   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:26.791460   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:11:26.791681   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:26.804143   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:11:26.804199   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:26.819911   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:11:26.819968   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:26.830479   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:11:26.830531   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:26.843329   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:11:26.843399   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:26.853481   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:11:26.853533   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:26.867192   12861 logs.go:276] 0 containers: []
	W0318 04:11:26.867203   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:26.867264   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:26.877560   12861 logs.go:276] 1 containers: [14437573f440]
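The block above enumerates container IDs per control-plane component by filtering on the kubelet's k8s_<name> container-name prefix, then warns when a filter (here "kindnet") matches nothing. A standalone sketch of the same enumeration, with the component list inferred from the filters in the log:

    # List container IDs for each control-plane component, as minikube does.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter=name=k8s_${c} --format={{.ID}})
      echo "${c}: ${ids:-<none>}"
    done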
	I0318 04:11:26.877577   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:26.877581   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:26.912552   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:11:26.912560   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:11:26.929398   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:11:26.929413   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:11:26.941281   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:11:26.941291   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:11:26.952558   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:26.952572   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:26.957408   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:11:26.957416   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:11:26.973439   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:11:26.973449   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:11:26.993143   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:26.993159   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:27.031191   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:11:27.031202   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:11:27.047798   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:11:27.047818   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:11:27.064132   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:11:27.064144   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:11:27.076243   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:11:27.076253   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:11:27.088787   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:11:27.088799   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:27.101494   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:11:27.101505   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:11:27.117470   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:27.117485   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
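Taken together, one diagnostic pass collects the kubelet and Docker/cri-docker journals, filtered dmesg output, `kubectl describe nodes`, and a 400-line tail of every component container before the next healthz attempt. A condensed sketch of that pass, lifted from the Run: lines above (paths and tail lengths are the ones minikube uses in-guest; <container-id> is a placeholder for the IDs enumerated earlier):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    docker logs --tail 400 <container-id>   # repeated once per component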
	I0318 04:11:29.646837   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:29.840392   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:34.647085   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:34.647363   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:34.672896   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:11:34.673005   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:34.691128   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:11:34.691204   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:34.704975   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:11:34.705055   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:34.715886   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:11:34.715963   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:34.725934   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:11:34.726008   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:34.736461   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:11:34.736530   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:34.751008   12861 logs.go:276] 0 containers: []
	W0318 04:11:34.751018   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:34.751075   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:34.762883   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:11:34.762901   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:34.762906   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:34.798643   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:11:34.798663   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:11:34.821880   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:11:34.821896   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:11:34.836120   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:11:34.836131   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:34.849186   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:11:34.849199   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:11:34.865432   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:11:34.865444   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:11:34.878095   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:11:34.878107   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:11:34.890803   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:34.890814   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:34.916838   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:34.916866   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:34.922562   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:34.922574   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:34.962359   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:11:34.962368   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:11:34.980744   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:11:34.980755   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:11:34.993824   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:11:34.993835   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:11:35.012758   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:11:35.012768   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:11:35.027252   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:11:35.027269   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:11:34.842422   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:34.842504   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:34.859004   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:11:34.859079   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:34.870837   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:11:34.870911   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:34.884122   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:11:34.884199   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:34.899131   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:11:34.899203   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:34.909749   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:11:34.909808   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:34.922828   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:11:34.922898   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:34.934152   12818 logs.go:276] 0 containers: []
	W0318 04:11:34.934165   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:34.934227   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:34.947816   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:11:34.947835   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:11:34.947840   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:11:34.961156   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:11:34.961167   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:11:34.977997   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:11:34.978010   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:34.991100   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:34.991116   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:35.030364   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:35.030377   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:35.035506   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:35.035520   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:35.071126   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:11:35.071137   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:11:35.086355   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:11:35.086368   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:11:35.098359   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:11:35.098369   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:11:35.113497   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:11:35.113508   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:11:35.125259   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:35.125273   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:35.149222   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:11:35.149232   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:11:35.162945   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:11:35.162955   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:11:35.174901   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:11:35.174910   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:11:35.186176   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:11:35.186187   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:11:37.706379   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:37.546759   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:42.707682   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:42.707775   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:42.719391   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:11:42.719459   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:42.731312   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:11:42.731385   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:42.743018   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:11:42.743096   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:42.754898   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:11:42.754972   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:42.766628   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:11:42.766700   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:42.778886   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:11:42.778955   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:42.790272   12818 logs.go:276] 0 containers: []
	W0318 04:11:42.790286   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:42.790351   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:42.801667   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:11:42.801686   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:11:42.801693   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:11:42.813957   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:11:42.813966   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:11:42.837449   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:42.837460   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:42.842349   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:11:42.842355   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:11:42.855321   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:11:42.855332   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:11:42.868582   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:11:42.868593   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:11:42.883287   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:11:42.883299   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:42.900332   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:11:42.900343   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:11:42.923163   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:11:42.923175   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:11:42.941292   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:11:42.941303   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:11:42.952638   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:42.952650   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:42.977304   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:42.977311   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:43.014532   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:43.014540   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:43.051580   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:11:43.051589   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:11:43.066167   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:11:43.066177   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:11:42.547532   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:42.547751   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:42.564395   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:11:42.564485   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:42.577751   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:11:42.577830   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:42.589007   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:11:42.589078   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:42.610408   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:11:42.610478   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:42.621359   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:11:42.621421   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:42.633242   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:11:42.633315   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:42.643559   12861 logs.go:276] 0 containers: []
	W0318 04:11:42.643568   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:42.643623   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:42.653803   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:11:42.653822   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:11:42.653827   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:11:42.665899   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:11:42.665912   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:11:42.681123   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:42.681133   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:42.716999   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:11:42.717011   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:11:42.732815   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:11:42.732826   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:11:42.745767   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:11:42.745780   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:11:42.758723   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:42.758733   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:42.763243   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:11:42.763253   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:11:42.777395   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:11:42.777410   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:11:42.793010   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:11:42.793019   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:11:42.812015   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:11:42.812029   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:42.826278   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:42.826291   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:42.863033   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:11:42.863052   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:11:42.876300   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:11:42.876311   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:11:42.889060   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:42.889072   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:45.417078   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:45.590258   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:50.419255   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:50.419625   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:50.457525   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:11:50.457659   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:50.475332   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:11:50.475433   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:50.489050   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:11:50.489146   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:50.500946   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:11:50.501020   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:50.511365   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:11:50.511434   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:50.527955   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:11:50.528034   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:50.538512   12861 logs.go:276] 0 containers: []
	W0318 04:11:50.538526   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:50.538584   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:50.549338   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:11:50.549355   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:11:50.549361   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:11:50.574064   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:11:50.574075   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:11:50.585853   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:11:50.585867   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:11:50.598031   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:50.598044   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:50.626193   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:50.626208   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:50.663705   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:11:50.663728   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:11:50.678293   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:11:50.678304   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:50.691240   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:50.691252   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:50.695902   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:11:50.695913   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:11:50.708831   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:11:50.708843   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:11:50.731481   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:11:50.731494   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:11:50.748661   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:11:50.748674   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:11:50.764088   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:11:50.764100   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:11:50.784365   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:11:50.784381   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:11:50.799830   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:50.799842   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:50.592313   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:50.592420   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:50.603759   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:11:50.603834   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:50.615359   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:11:50.615437   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:50.626500   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:11:50.626578   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:50.641786   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:11:50.641856   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:50.653404   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:11:50.653473   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:50.665244   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:11:50.665311   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:50.675629   12818 logs.go:276] 0 containers: []
	W0318 04:11:50.675644   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:50.675714   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:50.687651   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:11:50.687669   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:11:50.687674   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:11:50.703290   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:11:50.703306   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:11:50.718798   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:11:50.718810   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:11:50.737587   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:50.737600   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:50.777245   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:11:50.777258   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:11:50.790107   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:11:50.790120   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:11:50.807055   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:11:50.807068   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:11:50.819757   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:11:50.819770   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:11:50.833878   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:11:50.833890   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:11:50.846907   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:50.846919   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:50.852154   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:11:50.852162   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:11:50.866946   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:11:50.866957   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:50.878903   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:50.878914   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:50.915024   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:11:50.915036   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:11:50.929686   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:50.929695   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:53.349604   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:53.454897   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:58.351667   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:58.351814   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:58.363545   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:11:58.363621   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:58.374159   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:11:58.374233   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:58.385001   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:11:58.385083   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:58.395769   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:11:58.395849   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:58.406941   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:11:58.407013   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:58.420644   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:11:58.420714   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:58.431335   12861 logs.go:276] 0 containers: []
	W0318 04:11:58.431347   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:58.431408   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:58.443348   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:11:58.443364   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:11:58.443369   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:11:58.455123   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:11:58.455132   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:11:58.473316   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:11:58.473328   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:58.486352   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:11:58.486365   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:11:58.499199   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:11:58.499211   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:11:58.515444   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:58.515458   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:58.552847   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:11:58.552860   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:11:58.568980   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:11:58.568993   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:11:58.581203   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:11:58.581215   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:11:58.594582   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:11:58.594592   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:11:58.609322   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:58.609329   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:58.635919   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:11:58.635929   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:11:58.652817   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:58.652829   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:58.690799   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:58.690810   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:58.695596   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:11:58.695607   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:12:01.213024   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:58.454971   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:58.455064   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:58.466343   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:11:58.466416   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:58.477869   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:11:58.477943   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:58.489299   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:11:58.489399   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:58.500487   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:11:58.500553   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:58.511885   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:11:58.511962   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:58.523385   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:11:58.523466   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:58.534601   12818 logs.go:276] 0 containers: []
	W0318 04:11:58.534614   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:58.534677   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:58.552978   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:11:58.552991   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:58.552995   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:58.593071   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:11:58.593093   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:11:58.609303   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:11:58.609315   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:11:58.622099   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:11:58.622111   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:11:58.634370   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:11:58.634380   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:11:58.653557   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:58.653564   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:58.658567   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:11:58.658578   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:11:58.676728   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:11:58.676738   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:11:58.688239   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:11:58.688253   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:11:58.701321   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:58.701333   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:58.737657   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:11:58.737667   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:11:58.751702   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:58.751712   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:58.775667   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:11:58.775676   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:58.787116   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:11:58.787127   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:11:58.798897   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:11:58.798908   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:12:01.312072   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:12:06.215179   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:12:06.215313   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:12:06.226646   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:12:06.226710   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:12:06.246319   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:12:06.246397   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:12:06.262669   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:12:06.262748   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:12:06.282823   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:12:06.282900   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:12:06.298846   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:12:06.298909   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:12:06.309420   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:12:06.309492   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:12:06.320617   12861 logs.go:276] 0 containers: []
	W0318 04:12:06.320633   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:12:06.320694   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:12:06.331904   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:12:06.331925   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:12:06.331931   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:12:06.358678   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:12:06.358693   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:12:06.398056   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:12:06.398070   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:12:06.417327   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:12:06.417345   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:12:06.430658   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:12:06.430669   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:12:06.450780   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:12:06.450790   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:12:06.463679   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:12:06.463690   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:12:06.483743   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:12:06.483755   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:12:06.314131   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:12:06.314200   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:12:06.325841   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:12:06.325917   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:12:06.337687   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:12:06.337768   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:12:06.349170   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:12:06.349247   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:12:06.360358   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:12:06.360430   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:12:06.376241   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:12:06.376312   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:12:06.387668   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:12:06.387742   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:12:06.399052   12818 logs.go:276] 0 containers: []
	W0318 04:12:06.399061   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:12:06.399123   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:12:06.410487   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:12:06.410507   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:12:06.410513   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:12:06.423362   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:12:06.423374   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:12:06.436795   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:12:06.436806   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:12:06.448858   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:12:06.448871   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:12:06.464503   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:12:06.464511   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:12:06.479549   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:12:06.479559   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:12:06.491792   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:12:06.491806   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:12:06.504717   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:12:06.504729   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:12:06.526094   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:12:06.526106   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:12:06.553504   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:12:06.553518   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:12:06.594384   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:12:06.594394   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:12:06.632230   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:12:06.632244   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:12:06.645146   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:12:06.645157   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:12:06.656810   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:12:06.656821   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:12:06.661471   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:12:06.661478   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:12:06.497110   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:12:06.498410   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:12:06.516296   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:12:06.516309   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:12:06.529091   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:12:06.529101   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:12:06.548783   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:12:06.548797   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:12:06.564201   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:12:06.564212   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:12:06.580892   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:12:06.580902   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:12:06.616564   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:12:06.616580   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:12:09.123484   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:12:09.178939   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:12:14.125537   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:12:14.125726   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:12:14.141593   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:12:14.141686   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:12:14.153678   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:12:14.153744   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:12:14.164642   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:12:14.164704   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:12:14.182749   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:12:14.182809   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:12:14.193958   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:12:14.194026   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:12:14.205902   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:12:14.205977   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:12:14.216966   12861 logs.go:276] 0 containers: []
	W0318 04:12:14.216977   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:12:14.217038   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:12:14.232136   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:12:14.232157   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:12:14.232163   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:12:14.247041   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:12:14.247050   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:12:14.263908   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:12:14.263918   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:12:14.301243   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:12:14.301253   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:12:14.320942   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:12:14.320951   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:12:14.333590   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:12:14.333600   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:12:14.350051   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:12:14.350063   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:12:14.363340   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:12:14.363351   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:12:14.401066   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:12:14.401079   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:12:14.424053   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:12:14.424066   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:12:14.450447   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:12:14.450461   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:12:14.463166   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:12:14.463181   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:12:14.475948   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:12:14.475960   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:12:14.489628   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:12:14.489643   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:12:14.502517   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:12:14.502530   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:12:14.180901   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:12:14.181023   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:12:14.197325   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:12:14.197417   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:12:14.210827   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:12:14.210899   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:12:14.222595   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:12:14.222668   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:12:14.234191   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:12:14.234262   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:12:14.246357   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:12:14.246422   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:12:14.258420   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:12:14.258488   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:12:14.269527   12818 logs.go:276] 0 containers: []
	W0318 04:12:14.269538   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:12:14.269596   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:12:14.293308   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:12:14.293323   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:12:14.293328   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:12:14.318265   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:12:14.318277   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:12:14.331077   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:12:14.331089   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:12:14.356536   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:12:14.356549   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:12:14.369445   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:12:14.369458   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:12:14.381631   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:12:14.381642   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:12:14.394531   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:12:14.394545   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:12:14.413388   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:12:14.413404   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:12:14.425875   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:12:14.425884   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:12:14.430878   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:12:14.430889   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:12:14.469163   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:12:14.469176   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:12:14.485550   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:12:14.485567   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:12:14.497889   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:12:14.497901   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:12:14.536988   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:12:14.537001   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:12:14.555495   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:12:14.555505   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:12:17.068844   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:12:17.008817   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:12:22.070959   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:12:22.071057   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:12:22.082686   12818 logs.go:276] 1 containers: [4012abac166b]
	I0318 04:12:22.082757   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:12:22.097198   12818 logs.go:276] 1 containers: [d13df83a08b8]
	I0318 04:12:22.097272   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:12:22.108190   12818 logs.go:276] 4 containers: [3bec3521a2b1 dbdadcab84ec c4569429f40f 9e480ed55df6]
	I0318 04:12:22.108233   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:12:22.120414   12818 logs.go:276] 1 containers: [20bc7ccd4a6e]
	I0318 04:12:22.120490   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:12:22.132335   12818 logs.go:276] 1 containers: [3261abcf3680]
	I0318 04:12:22.132407   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:12:22.143872   12818 logs.go:276] 1 containers: [2173735035df]
	I0318 04:12:22.143940   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:12:22.158082   12818 logs.go:276] 0 containers: []
	W0318 04:12:22.158093   12818 logs.go:278] No container was found matching "kindnet"
	I0318 04:12:22.158153   12818 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:12:22.169613   12818 logs.go:276] 1 containers: [09c11be20dfd]
	I0318 04:12:22.169630   12818 logs.go:123] Gathering logs for coredns [9e480ed55df6] ...
	I0318 04:12:22.169636   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e480ed55df6"
	I0318 04:12:22.182160   12818 logs.go:123] Gathering logs for kube-proxy [3261abcf3680] ...
	I0318 04:12:22.182171   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3261abcf3680"
	I0318 04:12:22.194998   12818 logs.go:123] Gathering logs for storage-provisioner [09c11be20dfd] ...
	I0318 04:12:22.195008   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c11be20dfd"
	I0318 04:12:22.207902   12818 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:12:22.207915   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:12:22.247726   12818 logs.go:123] Gathering logs for coredns [c4569429f40f] ...
	I0318 04:12:22.247740   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4569429f40f"
	I0318 04:12:22.262890   12818 logs.go:123] Gathering logs for kube-controller-manager [2173735035df] ...
	I0318 04:12:22.262902   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2173735035df"
	I0318 04:12:22.280884   12818 logs.go:123] Gathering logs for kubelet ...
	I0318 04:12:22.280892   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:12:22.318913   12818 logs.go:123] Gathering logs for coredns [dbdadcab84ec] ...
	I0318 04:12:22.318926   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdadcab84ec"
	I0318 04:12:22.331815   12818 logs.go:123] Gathering logs for Docker ...
	I0318 04:12:22.331824   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:12:22.356409   12818 logs.go:123] Gathering logs for container status ...
	I0318 04:12:22.356426   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:12:22.369300   12818 logs.go:123] Gathering logs for etcd [d13df83a08b8] ...
	I0318 04:12:22.369318   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d13df83a08b8"
	I0318 04:12:22.392322   12818 logs.go:123] Gathering logs for coredns [3bec3521a2b1] ...
	I0318 04:12:22.392334   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bec3521a2b1"
	I0318 04:12:22.405967   12818 logs.go:123] Gathering logs for kube-scheduler [20bc7ccd4a6e] ...
	I0318 04:12:22.405982   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc7ccd4a6e"
	I0318 04:12:22.422695   12818 logs.go:123] Gathering logs for dmesg ...
	I0318 04:12:22.422711   12818 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:12:22.427890   12818 logs.go:123] Gathering logs for kube-apiserver [4012abac166b] ...
	I0318 04:12:22.427902   12818 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4012abac166b"
	I0318 04:12:22.010183   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:12:22.010427   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:12:22.034137   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:12:22.034221   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:12:22.050802   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:12:22.050876   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:12:22.062940   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:12:22.063012   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:12:22.073582   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:12:22.073660   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:12:22.085336   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:12:22.085441   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:12:22.106977   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:12:22.107045   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:12:22.118781   12861 logs.go:276] 0 containers: []
	W0318 04:12:22.118792   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:12:22.118850   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:12:22.130557   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:12:22.130574   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:12:22.130580   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:12:22.147019   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:12:22.147028   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:12:22.159352   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:12:22.159362   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:12:22.196196   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:12:22.196206   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:12:22.201007   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:12:22.201019   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:12:22.213970   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:12:22.213982   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:12:22.251579   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:12:22.251589   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:12:22.279871   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:12:22.279886   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:12:22.295357   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:12:22.295367   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:12:22.312057   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:12:22.312069   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:12:22.331489   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:12:22.331502   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:12:22.344263   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:12:22.344274   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:12:22.357725   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:12:22.357734   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:12:22.400252   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:12:22.400265   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:12:22.416507   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:12:22.416520   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:12:24.931405   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:12:24.944912   12818 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:12:29.947066   12818 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:12:29.951664   12818 out.go:177] 
	W0318 04:12:29.955509   12818 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0318 04:12:29.955524   12818 out.go:239] * 
	W0318 04:12:29.956741   12818 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:12:29.963494   12818 out.go:177] 
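	
	The GUEST_START exit above is the api_server.go wait loop seen throughout this log giving up: every probe of https://10.0.2.15:8443/healthz fails with "Client.Timeout exceeded" after roughly 5 seconds, and once the 6m0s node-wait budget is spent minikube aborts the start. A single probe can be replayed by hand; this is a sketch, assuming curl is present in the Buildroot guest image (the endpoint and the 5-second budget come from the log, the curl invocation does not):
	
	    # Probe the apiserver health endpoint once with the same 5s budget.
	    # -k skips verification of the minikube-CA-signed serving certificate.
	    minikube ssh -- curl -k --max-time 5 https://10.0.2.15:8443/healthz
	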
	I0318 04:12:29.933397   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:12:29.933719   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:12:29.964603   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:12:29.964718   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:12:29.983760   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:12:29.983850   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:12:30.006004   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:12:30.006161   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:12:30.022759   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:12:30.022866   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:12:30.038479   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:12:30.038557   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:12:30.050642   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:12:30.050705   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:12:30.063051   12861 logs.go:276] 0 containers: []
	W0318 04:12:30.063061   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:12:30.063106   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:12:30.074558   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:12:30.074575   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:12:30.074580   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:12:30.091200   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:12:30.091211   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:12:30.110540   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:12:30.110551   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:12:30.146944   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:12:30.146956   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:12:30.172679   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:12:30.172691   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:12:30.184052   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:12:30.184066   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:12:30.198404   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:12:30.198418   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:12:30.210855   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:12:30.210868   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:12:30.224710   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:12:30.224725   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:12:30.240110   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:12:30.240121   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:12:30.245232   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:12:30.245241   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:12:30.258594   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:12:30.258603   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:12:30.270318   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:12:30.270333   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:12:30.282283   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:12:30.282297   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:12:30.298523   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:12:30.298533   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:12:32.834535   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:12:37.835760   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:12:37.835934   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:12:37.856105   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:12:37.856205   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:12:37.871153   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:12:37.871231   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:12:37.882833   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:12:37.882904   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:12:37.893324   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:12:37.893392   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:12:37.905009   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:12:37.905077   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:12:37.916240   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:12:37.916305   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:12:37.927054   12861 logs.go:276] 0 containers: []
	W0318 04:12:37.927068   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:12:37.927132   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:12:37.937914   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:12:37.937931   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:12:37.937937   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:12:37.949837   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:12:37.949853   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:12:37.965118   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:12:37.965128   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:12:37.976261   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:12:37.976272   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:12:38.007361   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:12:38.007376   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:12:38.021698   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:12:38.021709   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:12:38.039978   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:12:38.039989   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:12:38.051377   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:12:38.051387   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:12:38.076185   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:12:38.076194   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:12:38.111273   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:12:38.111284   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:12:38.146766   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:12:38.146777   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:12:38.159777   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:12:38.159788   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:12:38.171693   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:12:38.171704   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:12:38.176571   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:12:38.176581   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:12:38.188230   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:12:38.188241   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:12:40.702389   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
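	
	Every retry pass above has the same shape: resolve one component's container ID with a docker name filter, then tail the last 400 lines of that container's log. Condensed into shell, with the commands copied from the Run: lines (kube-apiserver shown; the other components differ only in the filter name):
	
	    # One gathering pass, as executed over SSH inside the guest.
	    id=$(docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}})
	    docker logs --tail 400 "$id"
	
	What follows is the consolidated log dump collected after the failure.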
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-03-18 11:03:21 UTC, ends at Mon 2024-03-18 11:12:46 UTC. --
	Mar 18 11:12:30 running-upgrade-727000 dockerd[3240]: time="2024-03-18T11:12:30.806315795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:12:30 running-upgrade-727000 dockerd[3240]: time="2024-03-18T11:12:30.806345168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:12:30 running-upgrade-727000 dockerd[3240]: time="2024-03-18T11:12:30.806354834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:12:30 running-upgrade-727000 dockerd[3240]: time="2024-03-18T11:12:30.806418913Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/42c759ff3154524af3a69441143b6d0b1120f3a0eb7606cba7715c50adac1367 pid=18624 runtime=io.containerd.runc.v2
	Mar 18 11:12:31 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:31Z" level=error msg="ContainerStats resp: {0x40009b7e40 linux}"
	Mar 18 11:12:32 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:32Z" level=error msg="ContainerStats resp: {0x4000384fc0 linux}"
	Mar 18 11:12:32 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:32Z" level=error msg="ContainerStats resp: {0x4000385100 linux}"
	Mar 18 11:12:32 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:32Z" level=error msg="ContainerStats resp: {0x4000385940 linux}"
	Mar 18 11:12:32 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:32Z" level=error msg="ContainerStats resp: {0x4000385ac0 linux}"
	Mar 18 11:12:32 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:32Z" level=error msg="ContainerStats resp: {0x4000385c00 linux}"
	Mar 18 11:12:32 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:32Z" level=error msg="ContainerStats resp: {0x40008fe440 linux}"
	Mar 18 11:12:32 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:32Z" level=error msg="ContainerStats resp: {0x40008fed80 linux}"
	Mar 18 11:12:33 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:33Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 18 11:12:38 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:38Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 18 11:12:42 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:42Z" level=error msg="ContainerStats resp: {0x400060f140 linux}"
	Mar 18 11:12:42 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:42Z" level=error msg="ContainerStats resp: {0x40009b7300 linux}"
	Mar 18 11:12:43 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:43Z" level=error msg="ContainerStats resp: {0x4000384d00 linux}"
	Mar 18 11:12:43 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:43Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 18 11:12:44 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:44Z" level=error msg="ContainerStats resp: {0x4000385dc0 linux}"
	Mar 18 11:12:44 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:44Z" level=error msg="ContainerStats resp: {0x4000673b40 linux}"
	Mar 18 11:12:44 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:44Z" level=error msg="ContainerStats resp: {0x400039fdc0 linux}"
	Mar 18 11:12:44 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:44Z" level=error msg="ContainerStats resp: {0x40004fcdc0 linux}"
	Mar 18 11:12:44 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:44Z" level=error msg="ContainerStats resp: {0x4000846680 linux}"
	Mar 18 11:12:44 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:44Z" level=error msg="ContainerStats resp: {0x4000846840 linux}"
	Mar 18 11:12:44 running-upgrade-727000 cri-dockerd[3080]: time="2024-03-18T11:12:44Z" level=error msg="ContainerStats resp: {0x4000846e00 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	67fd9c5816b61       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   8cf5b48656a64
	42c759ff31545       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   b6405692f54ce
	3bec3521a2b13       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   b6405692f54ce
	dbdadcab84ec4       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   8cf5b48656a64
	09c11be20dfde       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   181ff366a1ea5
	3261abcf3680c       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   8e9db7f59666e
	d13df83a08b8a       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   7b22d0ee013ce
	2173735035df0       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   6e3d8992c4f49
	4012abac166bc       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   f1f8362c3b117
	20bc7ccd4a6e8       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   21e117de3f3ba
	
	
	==> coredns [3bec3521a2b1] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6781724532925513495.9223146503956765511. HINFO: read udp 10.244.0.2:54010->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6781724532925513495.9223146503956765511. HINFO: read udp 10.244.0.2:59922->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6781724532925513495.9223146503956765511. HINFO: read udp 10.244.0.2:59042->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6781724532925513495.9223146503956765511. HINFO: read udp 10.244.0.2:47393->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6781724532925513495.9223146503956765511. HINFO: read udp 10.244.0.2:36301->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6781724532925513495.9223146503956765511. HINFO: read udp 10.244.0.2:37410->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6781724532925513495.9223146503956765511. HINFO: read udp 10.244.0.2:56516->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6781724532925513495.9223146503956765511. HINFO: read udp 10.244.0.2:42914->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6781724532925513495.9223146503956765511. HINFO: read udp 10.244.0.2:58398->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6781724532925513495.9223146503956765511. HINFO: read udp 10.244.0.2:53763->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [42c759ff3154] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5159920489046152536.6953974451178151523. HINFO: read udp 10.244.0.2:51693->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5159920489046152536.6953974451178151523. HINFO: read udp 10.244.0.2:54384->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5159920489046152536.6953974451178151523. HINFO: read udp 10.244.0.2:45294->10.0.2.3:53: i/o timeout
	
	
	==> coredns [67fd9c5816b6] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3206324082228515162.7856511310276378385. HINFO: read udp 10.244.0.3:43386->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3206324082228515162.7856511310276378385. HINFO: read udp 10.244.0.3:41097->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3206324082228515162.7856511310276378385. HINFO: read udp 10.244.0.3:35339->10.0.2.3:53: i/o timeout
	
	
	==> coredns [dbdadcab84ec] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3114266508862552477.3211156896832427119. HINFO: read udp 10.244.0.3:59949->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3114266508862552477.3211156896832427119. HINFO: read udp 10.244.0.3:42909->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3114266508862552477.3211156896832427119. HINFO: read udp 10.244.0.3:49247->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3114266508862552477.3211156896832427119. HINFO: read udp 10.244.0.3:58734->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3114266508862552477.3211156896832427119. HINFO: read udp 10.244.0.3:45402->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3114266508862552477.3211156896832427119. HINFO: read udp 10.244.0.3:50597->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3114266508862552477.3211156896832427119. HINFO: read udp 10.244.0.3:39732->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3114266508862552477.3211156896832427119. HINFO: read udp 10.244.0.3:58253->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3114266508862552477.3211156896832427119. HINFO: read udp 10.244.0.3:34700->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3114266508862552477.3211156896832427119. HINFO: read udp 10.244.0.3:45918->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
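	
	All four CoreDNS instances show the same symptom: the startup HINFO self-probe to the upstream resolver 10.0.2.3:53 (QEMU's user-mode-networking DNS) times out every time. A quick check of that upstream from inside the guest, assuming dig is available there (the resolver address comes from the log, the command does not):
	
	    # Query QEMU's usernet DNS directly; a timeout here matches the errors above.
	    dig +time=2 +tries=1 @10.0.2.3 kubernetes.io A
	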
	
	
	==> describe nodes <==
	Name:               running-upgrade-727000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-727000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd
	                    minikube.k8s.io/name=running-upgrade-727000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T04_08_29_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 11:08:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-727000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 11:12:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 11:08:29 +0000   Mon, 18 Mar 2024 11:08:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 11:08:29 +0000   Mon, 18 Mar 2024 11:08:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 11:08:29 +0000   Mon, 18 Mar 2024 11:08:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 11:08:29 +0000   Mon, 18 Mar 2024 11:08:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-727000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 144dbb932adc4ad09da8b5718387cd1a
	  System UUID:                144dbb932adc4ad09da8b5718387cd1a
	  Boot ID:                    945bfe60-365d-47b5-a806-3fdc78c76154
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-76m4s                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-xxrbs                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-727000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-727000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-727000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-m7qfh                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-727000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x4 over 4m24s)  kubelet          Node running-upgrade-727000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m24s)  kubelet          Node running-upgrade-727000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x3 over 4m24s)  kubelet          Node running-upgrade-727000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-727000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-727000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-727000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-727000 status is now: NodeReady
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-727000 event: Registered Node running-upgrade-727000 in Controller
	
	
	==> dmesg <==
	[  +1.790287] systemd-fstab-generator[878]: Ignoring "noauto" for root device
	[  +0.087380] systemd-fstab-generator[889]: Ignoring "noauto" for root device
	[  +0.079676] systemd-fstab-generator[900]: Ignoring "noauto" for root device
	[  +1.142295] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.089125] systemd-fstab-generator[1050]: Ignoring "noauto" for root device
	[  +0.097131] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[  +2.314444] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[ +13.667461] systemd-fstab-generator[1952]: Ignoring "noauto" for root device
	[  +2.840647] systemd-fstab-generator[2232]: Ignoring "noauto" for root device
	[  +0.192102] systemd-fstab-generator[2271]: Ignoring "noauto" for root device
	[  +0.094456] systemd-fstab-generator[2282]: Ignoring "noauto" for root device
	[  +0.098830] systemd-fstab-generator[2295]: Ignoring "noauto" for root device
	[Mar18 11:04] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.199299] systemd-fstab-generator[3034]: Ignoring "noauto" for root device
	[  +0.073579] systemd-fstab-generator[3048]: Ignoring "noauto" for root device
	[  +0.093329] systemd-fstab-generator[3059]: Ignoring "noauto" for root device
	[  +0.091493] systemd-fstab-generator[3073]: Ignoring "noauto" for root device
	[  +2.319132] systemd-fstab-generator[3227]: Ignoring "noauto" for root device
	[  +5.249427] systemd-fstab-generator[3593]: Ignoring "noauto" for root device
	[  +1.002567] systemd-fstab-generator[3718]: Ignoring "noauto" for root device
	[ +18.218503] kauditd_printk_skb: 68 callbacks suppressed
	[Mar18 11:08] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.355353] systemd-fstab-generator[11902]: Ignoring "noauto" for root device
	[  +6.128844] systemd-fstab-generator[12527]: Ignoring "noauto" for root device
	[  +0.473636] systemd-fstab-generator[12661]: Ignoring "noauto" for root device
	
	
	==> etcd [d13df83a08b8] <==
	{"level":"info","ts":"2024-03-18T11:08:24.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-03-18T11:08:24.047Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-03-18T11:08:24.072Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T11:08:24.079Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T11:08:24.072Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-18T11:08:24.079Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T11:08:24.079Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-18T11:08:24.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-18T11:08:24.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-18T11:08:24.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-03-18T11:08:24.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-03-18T11:08:24.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-18T11:08:24.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-03-18T11:08:24.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-18T11:08:24.919Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-727000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T11:08:24.919Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T11:08:24.920Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T11:08:24.921Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T11:08:24.921Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T11:08:24.921Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T11:08:24.921Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T11:08:24.922Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T11:08:24.922Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T11:08:24.923Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-03-18T11:08:24.924Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:12:46 up 9 min,  0 users,  load average: 0.19, 0.34, 0.21
	Linux running-upgrade-727000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [4012abac166b] <==
	I0318 11:08:26.170436       1 controller.go:611] quota admission added evaluator for: namespaces
	I0318 11:08:26.210194       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0318 11:08:26.210215       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 11:08:26.210225       1 cache.go:39] Caches are synced for autoregister controller
	I0318 11:08:26.210197       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0318 11:08:26.218123       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 11:08:26.226965       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0318 11:08:26.946361       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0318 11:08:27.120670       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0318 11:08:27.123972       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0318 11:08:27.124011       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 11:08:27.282685       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 11:08:27.292716       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 11:08:27.375567       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0318 11:08:27.377488       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0318 11:08:27.377840       1 controller.go:611] quota admission added evaluator for: endpoints
	I0318 11:08:27.379571       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 11:08:28.242401       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0318 11:08:28.973178       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0318 11:08:28.976570       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0318 11:08:28.982878       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0318 11:08:29.038105       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 11:08:42.258205       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0318 11:08:42.559282       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0318 11:08:42.810938       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [2173735035df] <==
	I0318 11:08:41.656530       1 shared_informer.go:262] Caches are synced for TTL
	I0318 11:08:41.656564       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0318 11:08:41.656591       1 shared_informer.go:262] Caches are synced for ephemeral
	I0318 11:08:41.657791       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0318 11:08:41.661097       1 shared_informer.go:262] Caches are synced for namespace
	I0318 11:08:41.664240       1 shared_informer.go:262] Caches are synced for node
	I0318 11:08:41.664253       1 range_allocator.go:173] Starting range CIDR allocator
	I0318 11:08:41.664254       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0318 11:08:41.664257       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0318 11:08:41.667481       1 range_allocator.go:374] Set node running-upgrade-727000 PodCIDR to [10.244.0.0/24]
	I0318 11:08:41.669616       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0318 11:08:41.776046       1 shared_informer.go:262] Caches are synced for attach detach
	I0318 11:08:41.807185       1 shared_informer.go:262] Caches are synced for HPA
	I0318 11:08:41.851069       1 shared_informer.go:262] Caches are synced for disruption
	I0318 11:08:41.851083       1 disruption.go:371] Sending events to api server.
	I0318 11:08:41.859732       1 shared_informer.go:262] Caches are synced for resource quota
	I0318 11:08:41.859750       1 shared_informer.go:262] Caches are synced for resource quota
	I0318 11:08:41.907243       1 shared_informer.go:262] Caches are synced for deployment
	I0318 11:08:42.260969       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-m7qfh"
	I0318 11:08:42.277381       1 shared_informer.go:262] Caches are synced for garbage collector
	I0318 11:08:42.355702       1 shared_informer.go:262] Caches are synced for garbage collector
	I0318 11:08:42.355713       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0318 11:08:42.561550       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0318 11:08:42.659555       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-xxrbs"
	I0318 11:08:42.663492       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-76m4s"
	
	
	==> kube-proxy [3261abcf3680] <==
	I0318 11:08:42.799135       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0318 11:08:42.799167       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0318 11:08:42.799179       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0318 11:08:42.809083       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0318 11:08:42.809094       1 server_others.go:206] "Using iptables Proxier"
	I0318 11:08:42.809109       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0318 11:08:42.809201       1 server.go:661] "Version info" version="v1.24.1"
	I0318 11:08:42.809204       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 11:08:42.809426       1 config.go:317] "Starting service config controller"
	I0318 11:08:42.809433       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0318 11:08:42.809442       1 config.go:226] "Starting endpoint slice config controller"
	I0318 11:08:42.809444       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0318 11:08:42.809855       1 config.go:444] "Starting node config controller"
	I0318 11:08:42.809882       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0318 11:08:42.909805       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0318 11:08:42.909809       1 shared_informer.go:262] Caches are synced for service config
	I0318 11:08:42.910000       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [20bc7ccd4a6e] <==
	W0318 11:08:26.165494       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 11:08:26.165498       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 11:08:26.165514       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 11:08:26.165518       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 11:08:26.165534       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 11:08:26.165540       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 11:08:26.165556       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 11:08:26.165576       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 11:08:26.165593       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 11:08:26.165597       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 11:08:26.165612       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 11:08:26.165615       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 11:08:26.165631       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 11:08:26.165638       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 11:08:26.165661       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 11:08:26.165669       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 11:08:27.015413       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 11:08:27.015440       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 11:08:27.015766       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 11:08:27.015783       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 11:08:27.028422       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 11:08:27.028437       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 11:08:27.257793       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 11:08:27.257810       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 11:08:29.464908       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-03-18 11:03:21 UTC, ends at Mon 2024-03-18 11:12:46 UTC. --
	Mar 18 11:08:30 running-upgrade-727000 kubelet[12533]: E0318 11:08:30.816777   12533 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-727000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-727000"
	Mar 18 11:08:31 running-upgrade-727000 kubelet[12533]: E0318 11:08:31.019589   12533 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-727000\" already exists" pod="kube-system/etcd-running-upgrade-727000"
	Mar 18 11:08:31 running-upgrade-727000 kubelet[12533]: I0318 11:08:31.215592   12533 request.go:601] Waited for 1.1158039s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Mar 18 11:08:31 running-upgrade-727000 kubelet[12533]: E0318 11:08:31.220775   12533 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-727000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-727000"
	Mar 18 11:08:41 running-upgrade-727000 kubelet[12533]: I0318 11:08:41.597233   12533 topology_manager.go:200] "Topology Admit Handler"
	Mar 18 11:08:41 running-upgrade-727000 kubelet[12533]: I0318 11:08:41.753410   12533 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 18 11:08:41 running-upgrade-727000 kubelet[12533]: I0318 11:08:41.753466   12533 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvxqg\" (UniqueName: \"kubernetes.io/projected/2d031a78-2049-4b6e-8cde-b85d60b47966-kube-api-access-vvxqg\") pod \"storage-provisioner\" (UID: \"2d031a78-2049-4b6e-8cde-b85d60b47966\") " pod="kube-system/storage-provisioner"
	Mar 18 11:08:41 running-upgrade-727000 kubelet[12533]: I0318 11:08:41.753484   12533 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2d031a78-2049-4b6e-8cde-b85d60b47966-tmp\") pod \"storage-provisioner\" (UID: \"2d031a78-2049-4b6e-8cde-b85d60b47966\") " pod="kube-system/storage-provisioner"
	Mar 18 11:08:41 running-upgrade-727000 kubelet[12533]: I0318 11:08:41.753883   12533 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 18 11:08:41 running-upgrade-727000 kubelet[12533]: E0318 11:08:41.857812   12533 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 18 11:08:41 running-upgrade-727000 kubelet[12533]: E0318 11:08:41.857833   12533 projected.go:192] Error preparing data for projected volume kube-api-access-vvxqg for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Mar 18 11:08:41 running-upgrade-727000 kubelet[12533]: E0318 11:08:41.857868   12533 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/2d031a78-2049-4b6e-8cde-b85d60b47966-kube-api-access-vvxqg podName:2d031a78-2049-4b6e-8cde-b85d60b47966 nodeName:}" failed. No retries permitted until 2024-03-18 11:08:42.35785566 +0000 UTC m=+13.394321536 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vvxqg" (UniqueName: "kubernetes.io/projected/2d031a78-2049-4b6e-8cde-b85d60b47966-kube-api-access-vvxqg") pod "storage-provisioner" (UID: "2d031a78-2049-4b6e-8cde-b85d60b47966") : configmap "kube-root-ca.crt" not found
	Mar 18 11:08:42 running-upgrade-727000 kubelet[12533]: I0318 11:08:42.263954   12533 topology_manager.go:200] "Topology Admit Handler"
	Mar 18 11:08:42 running-upgrade-727000 kubelet[12533]: I0318 11:08:42.364183   12533 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06eb67d2-c2c8-47f7-8e02-81c01c9b1139-lib-modules\") pod \"kube-proxy-m7qfh\" (UID: \"06eb67d2-c2c8-47f7-8e02-81c01c9b1139\") " pod="kube-system/kube-proxy-m7qfh"
	Mar 18 11:08:42 running-upgrade-727000 kubelet[12533]: I0318 11:08:42.364209   12533 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/06eb67d2-c2c8-47f7-8e02-81c01c9b1139-kube-proxy\") pod \"kube-proxy-m7qfh\" (UID: \"06eb67d2-c2c8-47f7-8e02-81c01c9b1139\") " pod="kube-system/kube-proxy-m7qfh"
	Mar 18 11:08:42 running-upgrade-727000 kubelet[12533]: I0318 11:08:42.364232   12533 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06eb67d2-c2c8-47f7-8e02-81c01c9b1139-xtables-lock\") pod \"kube-proxy-m7qfh\" (UID: \"06eb67d2-c2c8-47f7-8e02-81c01c9b1139\") " pod="kube-system/kube-proxy-m7qfh"
	Mar 18 11:08:42 running-upgrade-727000 kubelet[12533]: I0318 11:08:42.364242   12533 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcj9s\" (UniqueName: \"kubernetes.io/projected/06eb67d2-c2c8-47f7-8e02-81c01c9b1139-kube-api-access-hcj9s\") pod \"kube-proxy-m7qfh\" (UID: \"06eb67d2-c2c8-47f7-8e02-81c01c9b1139\") " pod="kube-system/kube-proxy-m7qfh"
	Mar 18 11:08:42 running-upgrade-727000 kubelet[12533]: I0318 11:08:42.662967   12533 topology_manager.go:200] "Topology Admit Handler"
	Mar 18 11:08:42 running-upgrade-727000 kubelet[12533]: I0318 11:08:42.665105   12533 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwfwd\" (UniqueName: \"kubernetes.io/projected/e350fac8-784c-4b73-a9a8-8e9427f05f0f-kube-api-access-qwfwd\") pod \"coredns-6d4b75cb6d-xxrbs\" (UID: \"e350fac8-784c-4b73-a9a8-8e9427f05f0f\") " pod="kube-system/coredns-6d4b75cb6d-xxrbs"
	Mar 18 11:08:42 running-upgrade-727000 kubelet[12533]: I0318 11:08:42.665122   12533 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e350fac8-784c-4b73-a9a8-8e9427f05f0f-config-volume\") pod \"coredns-6d4b75cb6d-xxrbs\" (UID: \"e350fac8-784c-4b73-a9a8-8e9427f05f0f\") " pod="kube-system/coredns-6d4b75cb6d-xxrbs"
	Mar 18 11:08:42 running-upgrade-727000 kubelet[12533]: I0318 11:08:42.667788   12533 topology_manager.go:200] "Topology Admit Handler"
	Mar 18 11:08:42 running-upgrade-727000 kubelet[12533]: I0318 11:08:42.869882   12533 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/467aae21-460d-4ffd-8f59-41e248000536-config-volume\") pod \"coredns-6d4b75cb6d-76m4s\" (UID: \"467aae21-460d-4ffd-8f59-41e248000536\") " pod="kube-system/coredns-6d4b75cb6d-76m4s"
	Mar 18 11:08:42 running-upgrade-727000 kubelet[12533]: I0318 11:08:42.869914   12533 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p88n4\" (UniqueName: \"kubernetes.io/projected/467aae21-460d-4ffd-8f59-41e248000536-kube-api-access-p88n4\") pod \"coredns-6d4b75cb6d-76m4s\" (UID: \"467aae21-460d-4ffd-8f59-41e248000536\") " pod="kube-system/coredns-6d4b75cb6d-76m4s"
	Mar 18 11:12:31 running-upgrade-727000 kubelet[12533]: I0318 11:12:31.263187   12533 scope.go:110] "RemoveContainer" containerID="c4569429f40f21bbbefd6a509c97a01ab2226411b249fb23ed2449ea41d82f61"
	Mar 18 11:12:31 running-upgrade-727000 kubelet[12533]: I0318 11:12:31.281808   12533 scope.go:110] "RemoveContainer" containerID="9e480ed55df64b18d2df30cdc15b05f6bdc8f7e19efdfb529cfdb791e8ba4a7c"
	
	
	==> storage-provisioner [09c11be20dfd] <==
	I0318 11:08:42.771252       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 11:08:42.778924       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 11:08:42.779039       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 11:08:42.782386       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 11:08:42.782513       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-727000_97374f7f-f58b-476b-9327-e5bed85d586a!
	I0318 11:08:42.782911       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"00ac3d0d-24e5-4c45-bfad-5d2eb7f1dc2f", APIVersion:"v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-727000_97374f7f-f58b-476b-9327-e5bed85d586a became leader
	I0318 11:08:42.883326       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-727000_97374f7f-f58b-476b-9327-e5bed85d586a!
	

-- /stdout --
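An aside before the status probe below: the component logs above describe a control plane that came up cleanly (informer caches synced, CoreDNS scaled to 2 replicas, kube-proxy in iptables mode, the storage provisioner holding its leader lease), and the kube-scheduler "forbidden" warnings are the usual startup race before RBAC bootstrapping propagates; they stop once its caches sync at 11:08:29. The probe itself can be reproduced by hand with the same Go-template flags the harness uses; a minimal sketch, assuming the profile still exists:

	# Query single status fields via Go templates (flags taken from the harness invocations in this report).
	out/minikube-darwin-arm64 status -p running-upgrade-727000 --format={{.Host}}
	out/minikube-darwin-arm64 status -p running-upgrade-727000 --format={{.APIServer}}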
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-727000 -n running-upgrade-727000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-727000 -n running-upgrade-727000: exit status 2 (15.68838675s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-727000" apiserver is not running, skipping kubectl commands (state="Stopped")
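A note on the "(may be ok)" annotations: minikube composes the status exit code from per-component bits (host, kubelet, apiserver; see status.go in the minikube source), so the exit status 2 here means the host answered but a cluster component reports Stopped, while the exit status 7 seen later in this report means all three are down. A sketch for reading the code by hand (the trailing echo is added for illustration):

	# Print the raw status bitmask for the profile.
	out/minikube-darwin-arm64 status -p running-upgrade-727000; echo "status exit=$?"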
helpers_test.go:175: Cleaning up "running-upgrade-727000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-727000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-727000: (2.305113333s)
--- FAIL: TestRunningBinaryUpgrade (625.57s)

TestKubernetesUpgrade (19.08s)
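For orientation, this test drives a three-step flow: bring a cluster up on the oldest supported Kubernetes (v1.20.0), stop it, then restart it on v1.29.0-rc.2. The commands below reproduce that flow by hand and are taken from the harness invocations later in this section (the harness additionally passes --alsologtostderr -v=1):

	# Upgrade flow driven by TestKubernetesUpgrade (commands reproduced from the harness lines below).
	out/minikube-darwin-arm64 start -p kubernetes-upgrade-294000 --memory=2200 --kubernetes-version=v1.20.0 --driver=qemu2
	out/minikube-darwin-arm64 stop -p kubernetes-upgrade-294000
	out/minikube-darwin-arm64 start -p kubernetes-upgrade-294000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=qemu2

In this run the very first start already fails at VM creation, so everything after it exercises the error paths.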

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-294000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-294000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.849812834s)

-- stdout --
	* [kubernetes-upgrade-294000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-294000" primary control-plane node in "kubernetes-upgrade-294000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-294000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:02:20.096074   12718 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:02:20.096192   12718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:02:20.096195   12718 out.go:304] Setting ErrFile to fd 2...
	I0318 04:02:20.096198   12718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:02:20.096320   12718 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:02:20.097392   12718 out.go:298] Setting JSON to false
	I0318 04:02:20.113527   12718 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7313,"bootTime":1710752427,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:02:20.113625   12718 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:02:20.117290   12718 out.go:177] * [kubernetes-upgrade-294000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:02:20.133180   12718 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:02:20.137148   12718 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:02:20.133206   12718 notify.go:220] Checking for updates...
	I0318 04:02:20.145971   12718 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:02:20.149137   12718 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:02:20.152180   12718 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:02:20.155190   12718 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:02:20.158586   12718 config.go:182] Loaded profile config "multinode-886000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:02:20.158663   12718 config.go:182] Loaded profile config "offline-docker-654000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:02:20.158706   12718 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:02:20.163150   12718 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:02:20.170069   12718 start.go:297] selected driver: qemu2
	I0318 04:02:20.170076   12718 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:02:20.170084   12718 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:02:20.172571   12718 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:02:20.176124   12718 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:02:20.179199   12718 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 04:02:20.179240   12718 cni.go:84] Creating CNI manager for ""
	I0318 04:02:20.179252   12718 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 04:02:20.179294   12718 start.go:340] cluster config:
	{Name:kubernetes-upgrade-294000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:02:20.184192   12718 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:02:20.192178   12718 out.go:177] * Starting "kubernetes-upgrade-294000" primary control-plane node in "kubernetes-upgrade-294000" cluster
	I0318 04:02:20.196120   12718 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 04:02:20.196141   12718 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 04:02:20.196164   12718 cache.go:56] Caching tarball of preloaded images
	I0318 04:02:20.196230   12718 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:02:20.196238   12718 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 04:02:20.196319   12718 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/kubernetes-upgrade-294000/config.json ...
	I0318 04:02:20.196331   12718 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/kubernetes-upgrade-294000/config.json: {Name:mke8dad316ec50eba161a92dc17bca09ac824a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:02:20.196580   12718 start.go:360] acquireMachinesLock for kubernetes-upgrade-294000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:02:20.196617   12718 start.go:364] duration metric: took 28.959µs to acquireMachinesLock for "kubernetes-upgrade-294000"
	I0318 04:02:20.196632   12718 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:02:20.196668   12718 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:02:20.201165   12718 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:02:20.219825   12718 start.go:159] libmachine.API.Create for "kubernetes-upgrade-294000" (driver="qemu2")
	I0318 04:02:20.219849   12718 client.go:168] LocalClient.Create starting
	I0318 04:02:20.219934   12718 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:02:20.219969   12718 main.go:141] libmachine: Decoding PEM data...
	I0318 04:02:20.219984   12718 main.go:141] libmachine: Parsing certificate...
	I0318 04:02:20.220034   12718 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:02:20.220059   12718 main.go:141] libmachine: Decoding PEM data...
	I0318 04:02:20.220072   12718 main.go:141] libmachine: Parsing certificate...
	I0318 04:02:20.220492   12718 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:02:20.361969   12718 main.go:141] libmachine: Creating SSH key...
	I0318 04:02:20.477120   12718 main.go:141] libmachine: Creating Disk image...
	I0318 04:02:20.477126   12718 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:02:20.477306   12718 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/disk.qcow2
	I0318 04:02:20.489844   12718 main.go:141] libmachine: STDOUT: 
	I0318 04:02:20.489863   12718 main.go:141] libmachine: STDERR: 
	I0318 04:02:20.489913   12718 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/disk.qcow2 +20000M
	I0318 04:02:20.500689   12718 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:02:20.500706   12718 main.go:141] libmachine: STDERR: 
	I0318 04:02:20.500721   12718 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/disk.qcow2
	I0318 04:02:20.500725   12718 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:02:20.500765   12718 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:78:d7:c6:d0:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/disk.qcow2
	I0318 04:02:20.502536   12718 main.go:141] libmachine: STDOUT: 
	I0318 04:02:20.502553   12718 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:02:20.502570   12718 client.go:171] duration metric: took 282.722042ms to LocalClient.Create
	I0318 04:02:22.504744   12718 start.go:128] duration metric: took 2.308109708s to createHost
	I0318 04:02:22.504825   12718 start.go:83] releasing machines lock for "kubernetes-upgrade-294000", held for 2.308255209s
	W0318 04:02:22.504935   12718 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:02:22.521019   12718 out.go:177] * Deleting "kubernetes-upgrade-294000" in qemu2 ...
	W0318 04:02:22.546099   12718 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:02:22.546124   12718 start.go:728] Will try again in 5 seconds ...
	I0318 04:02:27.548050   12718 start.go:360] acquireMachinesLock for kubernetes-upgrade-294000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:02:27.548123   12718 start.go:364] duration metric: took 51.125µs to acquireMachinesLock for "kubernetes-upgrade-294000"
	I0318 04:02:27.548140   12718 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:02:27.548215   12718 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:02:27.556257   12718 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:02:27.571498   12718 start.go:159] libmachine.API.Create for "kubernetes-upgrade-294000" (driver="qemu2")
	I0318 04:02:27.571524   12718 client.go:168] LocalClient.Create starting
	I0318 04:02:27.571582   12718 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:02:27.571606   12718 main.go:141] libmachine: Decoding PEM data...
	I0318 04:02:27.571615   12718 main.go:141] libmachine: Parsing certificate...
	I0318 04:02:27.571660   12718 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:02:27.571678   12718 main.go:141] libmachine: Decoding PEM data...
	I0318 04:02:27.571683   12718 main.go:141] libmachine: Parsing certificate...
	I0318 04:02:27.572023   12718 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:02:27.739376   12718 main.go:141] libmachine: Creating SSH key...
	I0318 04:02:27.846911   12718 main.go:141] libmachine: Creating Disk image...
	I0318 04:02:27.846917   12718 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:02:27.847105   12718 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/disk.qcow2
	I0318 04:02:27.859711   12718 main.go:141] libmachine: STDOUT: 
	I0318 04:02:27.859737   12718 main.go:141] libmachine: STDERR: 
	I0318 04:02:27.859794   12718 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/disk.qcow2 +20000M
	I0318 04:02:27.870561   12718 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:02:27.870585   12718 main.go:141] libmachine: STDERR: 
	I0318 04:02:27.870600   12718 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/disk.qcow2
	I0318 04:02:27.870605   12718 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:02:27.870638   12718 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:4e:5b:d1:ba:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/disk.qcow2
	I0318 04:02:27.872403   12718 main.go:141] libmachine: STDOUT: 
	I0318 04:02:27.872419   12718 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:02:27.872431   12718 client.go:171] duration metric: took 300.911458ms to LocalClient.Create
	I0318 04:02:29.874559   12718 start.go:128] duration metric: took 2.326383625s to createHost
	I0318 04:02:29.874632   12718 start.go:83] releasing machines lock for "kubernetes-upgrade-294000", held for 2.326556584s
	W0318 04:02:29.874986   12718 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-294000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-294000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:02:29.882797   12718 out.go:177] 
	W0318 04:02:29.889818   12718 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:02:29.889848   12718 out.go:239] * 
	* 
	W0318 04:02:29.892412   12718 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:02:29.901726   12718 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-294000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
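Both creation attempts above fail at the same point: the disk image is prepared successfully (the qemu-img convert and resize steps return cleanly), but QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the VM never boots. A troubleshooting sketch for the CI host follows; the brew service name assumes a Homebrew-managed socket_vmnet install:

	# Is anything serving the unix socket the qemu2 driver expects?
	ls -l /var/run/socket_vmnet
	sudo lsof -U 2>/dev/null | grep socket_vmnet
	# For a Homebrew-managed install, restarting the daemon usually clears a refused connection.
	sudo brew services restart socket_vmnet
	# The orphaned disk image can still be inspected while the profile exists (path from the log above).
	qemu-img info /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/disk.qcow2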
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-294000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-294000: (3.785726542s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-294000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-294000 status --format={{.Host}}: exit status 7 (67.329167ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
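Exit status 7 is consistent with the bitmask reading earlier in this report: host, kubelet, and apiserver all report down, which is expected for a VM that never booted. The check the harness just ran can be repeated by hand (flags taken from the harness line above; echo added for illustration):

	# Host state plus raw exit code for the profile.
	out/minikube-darwin-arm64 -p kubernetes-upgrade-294000 status --format={{.Host}}; echo "exit=$?"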
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-294000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-294000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.186991s)

-- stdout --
	* [kubernetes-upgrade-294000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-294000" primary control-plane node in "kubernetes-upgrade-294000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-294000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-294000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:02:33.804109   12767 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:02:33.804234   12767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:02:33.804237   12767 out.go:304] Setting ErrFile to fd 2...
	I0318 04:02:33.804240   12767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:02:33.804355   12767 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:02:33.805374   12767 out.go:298] Setting JSON to false
	I0318 04:02:33.821518   12767 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7326,"bootTime":1710752427,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:02:33.821588   12767 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:02:33.826369   12767 out.go:177] * [kubernetes-upgrade-294000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:02:33.833341   12767 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:02:33.833370   12767 notify.go:220] Checking for updates...
	I0318 04:02:33.840375   12767 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:02:33.843299   12767 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:02:33.846243   12767 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:02:33.849267   12767 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:02:33.852299   12767 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:02:33.855611   12767 config.go:182] Loaded profile config "kubernetes-upgrade-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0318 04:02:33.855864   12767 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:02:33.860234   12767 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:02:33.867243   12767 start.go:297] selected driver: qemu2
	I0318 04:02:33.867248   12767 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:02:33.867291   12767 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:02:33.869555   12767 cni.go:84] Creating CNI manager for ""
	I0318 04:02:33.869572   12767 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:02:33.869600   12767 start.go:340] cluster config:
	{Name:kubernetes-upgrade-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:02:33.873985   12767 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:02:33.882120   12767 out.go:177] * Starting "kubernetes-upgrade-294000" primary control-plane node in "kubernetes-upgrade-294000" cluster
	I0318 04:02:33.886315   12767 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 04:02:33.886330   12767 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0318 04:02:33.886350   12767 cache.go:56] Caching tarball of preloaded images
	I0318 04:02:33.886405   12767 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:02:33.886410   12767 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0318 04:02:33.886468   12767 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/kubernetes-upgrade-294000/config.json ...
	I0318 04:02:33.886973   12767 start.go:360] acquireMachinesLock for kubernetes-upgrade-294000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:02:33.887006   12767 start.go:364] duration metric: took 26.333µs to acquireMachinesLock for "kubernetes-upgrade-294000"
	I0318 04:02:33.887016   12767 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:02:33.887021   12767 fix.go:54] fixHost starting: 
	I0318 04:02:33.887141   12767 fix.go:112] recreateIfNeeded on kubernetes-upgrade-294000: state=Stopped err=<nil>
	W0318 04:02:33.887149   12767 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:02:33.890266   12767 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-294000" ...
	I0318 04:02:33.898326   12767 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:4e:5b:d1:ba:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/disk.qcow2
	I0318 04:02:33.900356   12767 main.go:141] libmachine: STDOUT: 
	I0318 04:02:33.900379   12767 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:02:33.900408   12767 fix.go:56] duration metric: took 13.38775ms for fixHost
	I0318 04:02:33.900413   12767 start.go:83] releasing machines lock for "kubernetes-upgrade-294000", held for 13.40275ms
	W0318 04:02:33.900422   12767 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:02:33.900456   12767 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:02:33.900461   12767 start.go:728] Will try again in 5 seconds ...
	I0318 04:02:38.900706   12767 start.go:360] acquireMachinesLock for kubernetes-upgrade-294000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:02:38.901173   12767 start.go:364] duration metric: took 377.917µs to acquireMachinesLock for "kubernetes-upgrade-294000"
	I0318 04:02:38.901318   12767 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:02:38.901344   12767 fix.go:54] fixHost starting: 
	I0318 04:02:38.902110   12767 fix.go:112] recreateIfNeeded on kubernetes-upgrade-294000: state=Stopped err=<nil>
	W0318 04:02:38.902141   12767 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:02:38.911360   12767 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-294000" ...
	I0318 04:02:38.916892   12767 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:4e:5b:d1:ba:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubernetes-upgrade-294000/disk.qcow2
	I0318 04:02:38.927410   12767 main.go:141] libmachine: STDOUT: 
	I0318 04:02:38.927501   12767 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:02:38.927578   12767 fix.go:56] duration metric: took 26.237458ms for fixHost
	I0318 04:02:38.927596   12767 start.go:83] releasing machines lock for "kubernetes-upgrade-294000", held for 26.391292ms
	W0318 04:02:38.927825   12767 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-294000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:02:38.935288   12767 out.go:177] 
	W0318 04:02:38.938370   12767 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:02:38.938407   12767 out.go:239] * 
	W0318 04:02:38.940293   12767 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:02:38.948297   12767 out.go:177] 
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-294000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
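Note on the failure above: both restart attempts abort with the same STDERR, Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning nothing was listening on the socket_vmnet unix socket when libmachine invoked socket_vmnet_client. As a minimal sketch (hypothetical, not part of minikube or this test suite), the condition can be reproduced from outside the harness with a plain unix-socket dial against the SocketVMnetPath shown in the cluster config:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket that socket_vmnet_client connects to.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here reproduces the driver start failure.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A refused dial points at the socket_vmnet daemon on the host rather than at minikube itself, which is consistent with the retry five seconds later failing identically.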
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-294000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-294000 version --output=json: exit status 1 (55.718875ms)
** stderr ** 
	error: context "kubernetes-upgrade-294000" does not exist
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
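The kubectl error is literal: the failed start exited before the "kubernetes-upgrade-294000" context was ever written to the kubeconfig, so any --context invocation is dead on arrival. A quick existence check (a hypothetical helper, not part of the harness; kubectl config get-contexts -o name prints one context name per line) would confirm it:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// contextExists reports whether the named context is present in the
	// active kubeconfig, using kubectl's own view of it.
	func contextExists(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, c := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if c == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := contextExists("kubernetes-upgrade-294000")
		fmt.Println(ok, err) // false <nil> in the state captured above
	}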
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-03-18 04:02:39.016274 -0700 PDT m=+789.408723542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-294000 -n kubernetes-upgrade-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-294000 -n kubernetes-upgrade-294000: exit status 7 (34.848583ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-294000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-294000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-294000
--- FAIL: TestKubernetesUpgrade (19.08s)
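The upgrade start above follows a fixed single-retry policy: two host-start attempts five seconds apart (04:02:33 and 04:02:38), then exit status 80. Schematically the control flow amounts to the following (an illustration reconstructed from the log, not minikube's actual start.go):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the qemu2 driver start, failing the way the
	// log above does.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("* Failed to start qemu2 VM:", err) // surfaces as exit status 80
			}
		}
	}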

TestStoppedBinaryUpgrade/Upgrade (636.84s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2550753968 start -p stopped-upgrade-484000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2550753968 start -p stopped-upgrade-484000 --memory=2200 --vm-driver=qemu2 : (1m41.734617292s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2550753968 -p stopped-upgrade-484000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2550753968 -p stopped-upgrade-484000 stop: (12.113187s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-484000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-484000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.921683417s)
-- stdout --
	* [stopped-upgrade-484000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-484000" primary control-plane node in "stopped-upgrade-484000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-484000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	
-- /stdout --
** stderr ** 
	I0318 04:04:26.545191   12861 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:04:26.545333   12861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:04:26.545337   12861 out.go:304] Setting ErrFile to fd 2...
	I0318 04:04:26.545341   12861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:04:26.545496   12861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:04:26.546799   12861 out.go:298] Setting JSON to false
	I0318 04:04:26.565187   12861 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7439,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:04:26.565260   12861 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:04:26.570135   12861 out.go:177] * [stopped-upgrade-484000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:04:26.578149   12861 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:04:26.578201   12861 notify.go:220] Checking for updates...
	I0318 04:04:26.586127   12861 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:04:26.589139   12861 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:04:26.592034   12861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:04:26.595082   12861 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:04:26.598125   12861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:04:26.601432   12861 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:04:26.605066   12861 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 04:04:26.608088   12861 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:04:26.611953   12861 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:04:26.619104   12861 start.go:297] selected driver: qemu2
	I0318 04:04:26.619110   12861 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52368 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 04:04:26.619171   12861 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:04:26.621944   12861 cni.go:84] Creating CNI manager for ""
	I0318 04:04:26.621967   12861 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:04:26.621995   12861 start.go:340] cluster config:
	{Name:stopped-upgrade-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52368 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 04:04:26.622043   12861 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:04:26.629086   12861 out.go:177] * Starting "stopped-upgrade-484000" primary control-plane node in "stopped-upgrade-484000" cluster
	I0318 04:04:26.633147   12861 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 04:04:26.633162   12861 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0318 04:04:26.633176   12861 cache.go:56] Caching tarball of preloaded images
	I0318 04:04:26.633228   12861 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:04:26.633235   12861 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0318 04:04:26.633290   12861 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/config.json ...
	I0318 04:04:26.633770   12861 start.go:360] acquireMachinesLock for stopped-upgrade-484000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:04:26.633798   12861 start.go:364] duration metric: took 21.75µs to acquireMachinesLock for "stopped-upgrade-484000"
	I0318 04:04:26.633806   12861 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:04:26.633811   12861 fix.go:54] fixHost starting: 
	I0318 04:04:26.633915   12861 fix.go:112] recreateIfNeeded on stopped-upgrade-484000: state=Stopped err=<nil>
	W0318 04:04:26.633926   12861 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:04:26.638115   12861 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-484000" ...
	I0318 04:04:26.646123   12861 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/stopped-upgrade-484000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/stopped-upgrade-484000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/stopped-upgrade-484000/qemu.pid -nic user,model=virtio,hostfwd=tcp::52306-:22,hostfwd=tcp::52307-:2376,hostname=stopped-upgrade-484000 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/stopped-upgrade-484000/disk.qcow2
	I0318 04:04:26.695491   12861 main.go:141] libmachine: STDOUT: 
	I0318 04:04:26.695520   12861 main.go:141] libmachine: STDERR: 
	I0318 04:04:26.695526   12861 main.go:141] libmachine: Waiting for VM to start (ssh -p 52306 docker@127.0.0.1)...
	I0318 04:04:46.879435   12861 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/config.json ...
	I0318 04:04:46.879778   12861 machine.go:94] provisionDockerMachine start ...
	I0318 04:04:46.879842   12861 main.go:141] libmachine: Using SSH client type: native
	I0318 04:04:46.880056   12861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10127dbf0] 0x101280450 <nil>  [] 0s} localhost 52306 <nil> <nil>}
	I0318 04:04:46.880063   12861 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 04:04:46.952214   12861 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 04:04:46.952234   12861 buildroot.go:166] provisioning hostname "stopped-upgrade-484000"
	I0318 04:04:46.952311   12861 main.go:141] libmachine: Using SSH client type: native
	I0318 04:04:46.952479   12861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10127dbf0] 0x101280450 <nil>  [] 0s} localhost 52306 <nil> <nil>}
	I0318 04:04:46.952489   12861 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-484000 && echo "stopped-upgrade-484000" | sudo tee /etc/hostname
	I0318 04:04:47.026970   12861 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-484000
	
	I0318 04:04:47.027040   12861 main.go:141] libmachine: Using SSH client type: native
	I0318 04:04:47.027170   12861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10127dbf0] 0x101280450 <nil>  [] 0s} localhost 52306 <nil> <nil>}
	I0318 04:04:47.027181   12861 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-484000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-484000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-484000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 04:04:47.095084   12861 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 04:04:47.095099   12861 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18431-10628/.minikube CaCertPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18431-10628/.minikube}
	I0318 04:04:47.095109   12861 buildroot.go:174] setting up certificates
	I0318 04:04:47.095118   12861 provision.go:84] configureAuth start
	I0318 04:04:47.095124   12861 provision.go:143] copyHostCerts
	I0318 04:04:47.095220   12861 exec_runner.go:144] found /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.pem, removing ...
	I0318 04:04:47.095229   12861 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.pem
	I0318 04:04:47.095352   12861 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.pem (1078 bytes)
	I0318 04:04:47.095558   12861 exec_runner.go:144] found /Users/jenkins/minikube-integration/18431-10628/.minikube/cert.pem, removing ...
	I0318 04:04:47.095563   12861 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18431-10628/.minikube/cert.pem
	I0318 04:04:47.095628   12861 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18431-10628/.minikube/cert.pem (1123 bytes)
	I0318 04:04:47.095745   12861 exec_runner.go:144] found /Users/jenkins/minikube-integration/18431-10628/.minikube/key.pem, removing ...
	I0318 04:04:47.095749   12861 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18431-10628/.minikube/key.pem
	I0318 04:04:47.095809   12861 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18431-10628/.minikube/key.pem (1675 bytes)
	I0318 04:04:47.095955   12861 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-484000 san=[127.0.0.1 localhost minikube stopped-upgrade-484000]
	I0318 04:04:47.152039   12861 provision.go:177] copyRemoteCerts
	I0318 04:04:47.152072   12861 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 04:04:47.152079   12861 sshutil.go:53] new ssh client: &{IP:localhost Port:52306 SSHKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	I0318 04:04:47.188814   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 04:04:47.195439   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 04:04:47.202323   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 04:04:47.209916   12861 provision.go:87] duration metric: took 114.795292ms to configureAuth
	I0318 04:04:47.209925   12861 buildroot.go:189] setting minikube options for container-runtime
	I0318 04:04:47.210035   12861 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:04:47.210068   12861 main.go:141] libmachine: Using SSH client type: native
	I0318 04:04:47.210155   12861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10127dbf0] 0x101280450 <nil>  [] 0s} localhost 52306 <nil> <nil>}
	I0318 04:04:47.210159   12861 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 04:04:47.275220   12861 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 04:04:47.275231   12861 buildroot.go:70] root file system type: tmpfs
	I0318 04:04:47.275282   12861 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 04:04:47.275337   12861 main.go:141] libmachine: Using SSH client type: native
	I0318 04:04:47.275443   12861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10127dbf0] 0x101280450 <nil>  [] 0s} localhost 52306 <nil> <nil>}
	I0318 04:04:47.275478   12861 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 04:04:47.343434   12861 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 04:04:47.343490   12861 main.go:141] libmachine: Using SSH client type: native
	I0318 04:04:47.343595   12861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10127dbf0] 0x101280450 <nil>  [] 0s} localhost 52306 <nil> <nil>}
	I0318 04:04:47.343604   12861 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 04:04:47.709066   12861 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 04:04:47.709080   12861 machine.go:97] duration metric: took 829.313833ms to provisionDockerMachine
	I0318 04:04:47.709086   12861 start.go:293] postStartSetup for "stopped-upgrade-484000" (driver="qemu2")
	I0318 04:04:47.709094   12861 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 04:04:47.709155   12861 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 04:04:47.709164   12861 sshutil.go:53] new ssh client: &{IP:localhost Port:52306 SSHKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	I0318 04:04:47.745314   12861 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 04:04:47.746620   12861 info.go:137] Remote host: Buildroot 2021.02.12
	I0318 04:04:47.746628   12861 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18431-10628/.minikube/addons for local assets ...
	I0318 04:04:47.746707   12861 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18431-10628/.minikube/files for local assets ...
	I0318 04:04:47.746818   12861 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18431-10628/.minikube/files/etc/ssl/certs/110352.pem -> 110352.pem in /etc/ssl/certs
	I0318 04:04:47.746935   12861 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 04:04:47.749878   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/files/etc/ssl/certs/110352.pem --> /etc/ssl/certs/110352.pem (1708 bytes)
	I0318 04:04:47.756865   12861 start.go:296] duration metric: took 47.774667ms for postStartSetup
	I0318 04:04:47.756881   12861 fix.go:56] duration metric: took 21.123598375s for fixHost
	I0318 04:04:47.756915   12861 main.go:141] libmachine: Using SSH client type: native
	I0318 04:04:47.757012   12861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10127dbf0] 0x101280450 <nil>  [] 0s} localhost 52306 <nil> <nil>}
	I0318 04:04:47.757016   12861 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 04:04:47.825634   12861 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710759888.083216504
	
	I0318 04:04:47.825645   12861 fix.go:216] guest clock: 1710759888.083216504
	I0318 04:04:47.825650   12861 fix.go:229] Guest: 2024-03-18 04:04:48.083216504 -0700 PDT Remote: 2024-03-18 04:04:47.756882 -0700 PDT m=+21.245155209 (delta=326.334504ms)
	I0318 04:04:47.825661   12861 fix.go:200] guest clock delta is within tolerance: 326.334504ms
	I0318 04:04:47.825664   12861 start.go:83] releasing machines lock for "stopped-upgrade-484000", held for 21.192392625s
	I0318 04:04:47.825738   12861 ssh_runner.go:195] Run: cat /version.json
	I0318 04:04:47.825747   12861 sshutil.go:53] new ssh client: &{IP:localhost Port:52306 SSHKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	I0318 04:04:47.825764   12861 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 04:04:47.825787   12861 sshutil.go:53] new ssh client: &{IP:localhost Port:52306 SSHKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	W0318 04:04:47.826405   12861 sshutil.go:64] dial failure (will retry): dial tcp [::1]:52306: connect: connection refused
	I0318 04:04:47.826429   12861 retry.go:31] will retry after 297.432327ms: dial tcp [::1]:52306: connect: connection refused
	W0318 04:04:47.862298   12861 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0318 04:04:47.862360   12861 ssh_runner.go:195] Run: systemctl --version
	I0318 04:04:47.864225   12861 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 04:04:47.865673   12861 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 04:04:47.865706   12861 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0318 04:04:47.868933   12861 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0318 04:04:47.873217   12861 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 04:04:47.873226   12861 start.go:494] detecting cgroup driver to use...
	I0318 04:04:47.873299   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 04:04:47.880736   12861 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0318 04:04:47.883785   12861 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 04:04:47.886651   12861 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 04:04:47.886677   12861 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 04:04:47.890009   12861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 04:04:47.893568   12861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 04:04:47.896967   12861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 04:04:47.900463   12861 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 04:04:47.903198   12861 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 04:04:47.906455   12861 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 04:04:47.909557   12861 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 04:04:47.912196   12861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:04:47.974123   12861 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 04:04:47.980442   12861 start.go:494] detecting cgroup driver to use...
	I0318 04:04:47.980513   12861 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 04:04:47.989276   12861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 04:04:47.994234   12861 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 04:04:48.000032   12861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 04:04:48.004264   12861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 04:04:48.008656   12861 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 04:04:48.060444   12861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 04:04:48.065257   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 04:04:48.070613   12861 ssh_runner.go:195] Run: which cri-dockerd
	I0318 04:04:48.071864   12861 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 04:04:48.074628   12861 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 04:04:48.079635   12861 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 04:04:48.160375   12861 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 04:04:48.248879   12861 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 04:04:48.248959   12861 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 04:04:48.254999   12861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:04:48.333077   12861 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 04:04:49.487322   12861 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1542545s)
	I0318 04:04:49.487404   12861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 04:04:49.492004   12861 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0318 04:04:49.497783   12861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 04:04:49.502153   12861 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 04:04:49.566209   12861 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 04:04:49.622006   12861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:04:49.706302   12861 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 04:04:49.711830   12861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 04:04:49.716190   12861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:04:49.793297   12861 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 04:04:49.833986   12861 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 04:04:49.834059   12861 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 04:04:49.837137   12861 start.go:562] Will wait 60s for crictl version
	I0318 04:04:49.837196   12861 ssh_runner.go:195] Run: which crictl
	I0318 04:04:49.838561   12861 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 04:04:49.854124   12861 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0318 04:04:49.854193   12861 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 04:04:49.871470   12861 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 04:04:49.893285   12861 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0318 04:04:49.893357   12861 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0318 04:04:49.894584   12861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 04:04:49.898446   12861 kubeadm.go:877] updating cluster {Name:stopped-upgrade-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52368 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0318 04:04:49.898495   12861 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 04:04:49.898542   12861 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 04:04:49.909145   12861 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 04:04:49.909152   12861 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 04:04:49.909193   12861 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 04:04:49.912141   12861 ssh_runner.go:195] Run: which lz4
	I0318 04:04:49.913288   12861 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 04:04:49.914408   12861 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 04:04:49.914419   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0318 04:04:50.629072   12861 docker.go:649] duration metric: took 715.835458ms to copy over tarball
	I0318 04:04:50.629134   12861 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 04:04:51.820973   12861 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.191854875s)
	I0318 04:04:51.820987   12861 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 04:04:51.836851   12861 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 04:04:51.839982   12861 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0318 04:04:51.845054   12861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:04:51.906879   12861 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 04:04:53.514337   12861 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.607480334s)
	I0318 04:04:53.514434   12861 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 04:04:53.526771   12861 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 04:04:53.526784   12861 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 04:04:53.526789   12861 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 04:04:53.533785   12861 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:04:53.533859   12861 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0318 04:04:53.534040   12861 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:04:53.534064   12861 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:04:53.534100   12861 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:04:53.534105   12861 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:04:53.534657   12861 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:04:53.534728   12861 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:04:53.542161   12861 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:04:53.543990   12861 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:04:53.544004   12861 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0318 04:04:53.544037   12861 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:04:53.544048   12861 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:04:53.544063   12861 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:04:53.544093   12861 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:04:53.544102   12861 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:04:55.452179   12861 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:04:55.489550   12861 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0318 04:04:55.489601   12861 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:04:55.489696   12861 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:04:55.509813   12861 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0318 04:04:55.509851   12861 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0318 04:04:55.524569   12861 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0318 04:04:55.524593   12861 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:04:55.524656   12861 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0318 04:04:55.536688   12861 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0318 04:04:55.552465   12861 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0318 04:04:55.564275   12861 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0318 04:04:55.564299   12861 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0318 04:04:55.564350   12861 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0318 04:04:55.576322   12861 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:04:55.577579   12861 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0318 04:04:55.577664   12861 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	W0318 04:04:55.584955   12861 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0318 04:04:55.585078   12861 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:04:55.587453   12861 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0318 04:04:55.587487   12861 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:04:55.587512   12861 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0318 04:04:55.587533   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0318 04:04:55.587618   12861 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:04:55.592711   12861 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:04:55.601655   12861 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0318 04:04:55.601682   12861 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:04:55.601732   12861 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:04:55.603656   12861 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0318 04:04:55.603666   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0318 04:04:55.613512   12861 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0318 04:04:55.621476   12861 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0318 04:04:55.621501   12861 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:04:55.621552   12861 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0318 04:04:55.621562   12861 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:04:55.621654   12861 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0318 04:04:55.646474   12861 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:04:55.653014   12861 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0318 04:04:55.653094   12861 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0318 04:04:55.653161   12861 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0318 04:04:55.653178   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0318 04:04:55.665844   12861 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0318 04:04:55.665867   12861 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:04:55.665923   12861 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:04:55.693886   12861 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0318 04:04:55.695272   12861 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0318 04:04:55.695282   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0318 04:04:55.736806   12861 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0318 04:04:56.062269   12861 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0318 04:04:56.062913   12861 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:04:56.098797   12861 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0318 04:04:56.098841   12861 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:04:56.098947   12861 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:04:56.127690   12861 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 04:04:56.127830   12861 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0318 04:04:56.129750   12861 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0318 04:04:56.129764   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0318 04:04:56.155550   12861 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 04:04:56.155566   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0318 04:04:56.391542   12861 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 04:04:56.391591   12861 cache_images.go:92] duration metric: took 2.86486475s to LoadCachedImages
	W0318 04:04:56.391639   12861 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
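The cache_images.go flow above reduces to one check per image: if `docker image inspect --format {{.Id}}` inside the guest does not return the expected hash, the stale copy is removed and the cached tarball is streamed through `docker load`. A minimal Go sketch of that decision; the hash literal and paths are copied from the log, and running the docker CLI directly (rather than through minikube's ssh_runner) is an assumption for brevity.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // ensureImage reloads an image from a cached tarball when the runtime's
    // copy is missing or its ID differs from the expected hash ("needs transfer").
    func ensureImage(image, wantID, tarball string) error {
    	out, err := exec.Command("docker", "image", "inspect",
    		"--format", "{{.Id}}", image).Output()
    	gotID := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
    	if err == nil && gotID == wantID {
    		return nil // already present at the right hash
    	}
    	// Stale or missing: drop any old copy, then load from the cache dir.
    	_ = exec.Command("docker", "rmi", image).Run()
    	return exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("sudo cat %s | docker load", tarball)).Run()
    }

    func main() {
    	err := ensureImage(
    		"registry.k8s.io/pause:3.7",
    		"e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
    		"/var/lib/minikube/images/pause_3.7",
    	)
    	fmt.Println("ensureImage:", err)
    }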
	I0318 04:04:56.391645   12861 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0318 04:04:56.391696   12861 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-484000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 04:04:56.391773   12861 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 04:04:56.405156   12861 cni.go:84] Creating CNI manager for ""
	I0318 04:04:56.405169   12861 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:04:56.405174   12861 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 04:04:56.405183   12861 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-484000 NodeName:stopped-upgrade-484000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 04:04:56.405253   12861 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-484000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 04:04:56.405314   12861 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0318 04:04:56.408550   12861 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 04:04:56.408583   12861 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 04:04:56.411077   12861 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0318 04:04:56.415920   12861 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 04:04:56.420567   12861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
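Each "scp memory --> <path> (<n> bytes)" line above means the file content was rendered in-memory and streamed to the node over SSH, with no local temp file. A rough Go equivalent, assuming a plain ssh client on the host and sudo tee on the remote side (the host name below is a placeholder, not from the log):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    // writeRemote streams an in-memory payload into a root-owned remote file,
    // approximating ssh_runner.go's scp-from-memory step.
    func writeRemote(host, path string, data []byte) error {
    	cmd := exec.Command("ssh", host, "sudo tee "+path+" >/dev/null")
    	cmd.Stdin = bytes.NewReader(data)
    	return cmd.Run()
    }

    func main() {
    	unit := []byte("[Service]\nExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet\n")
    	fmt.Println(writeRemote("docker@example-node",
    		"/lib/systemd/system/kubelet.service", unit))
    }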
	I0318 04:04:56.425991   12861 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0318 04:04:56.427311   12861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 04:04:56.430688   12861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:04:56.516369   12861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 04:04:56.521999   12861 certs.go:68] Setting up /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000 for IP: 10.0.2.15
	I0318 04:04:56.522007   12861 certs.go:194] generating shared ca certs ...
	I0318 04:04:56.522016   12861 certs.go:226] acquiring lock for ca certs: {Name:mk88e7d2813085015a17ae8f4e4d52088bf47170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:04:56.522173   12861 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.key
	I0318 04:04:56.522231   12861 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/proxy-client-ca.key
	I0318 04:04:56.522238   12861 certs.go:256] generating profile certs ...
	I0318 04:04:56.522316   12861 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/client.key
	I0318 04:04:56.522334   12861 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.key.52cfa968
	I0318 04:04:56.522348   12861 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.crt.52cfa968 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0318 04:04:56.647490   12861 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.crt.52cfa968 ...
	I0318 04:04:56.647502   12861 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.crt.52cfa968: {Name:mk9602298f28832d9adc36f9a82559b6f11a161e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:04:56.647760   12861 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.key.52cfa968 ...
	I0318 04:04:56.647766   12861 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.key.52cfa968: {Name:mk41fb4ce1b695471d92aa2b46e2d02fc820d357 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:04:56.647897   12861 certs.go:381] copying /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.crt.52cfa968 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.crt
	I0318 04:04:56.648024   12861 certs.go:385] copying /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.key.52cfa968 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.key
	I0318 04:04:56.648158   12861 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/proxy-client.key
	I0318 04:04:56.648272   12861 certs.go:484] found cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/11035.pem (1338 bytes)
	W0318 04:04:56.648302   12861 certs.go:480] ignoring /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/11035_empty.pem, impossibly tiny 0 bytes
	I0318 04:04:56.648308   12861 certs.go:484] found cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 04:04:56.648335   12861 certs.go:484] found cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem (1078 bytes)
	I0318 04:04:56.648352   12861 certs.go:484] found cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem (1123 bytes)
	I0318 04:04:56.648368   12861 certs.go:484] found cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/key.pem (1675 bytes)
	I0318 04:04:56.648403   12861 certs.go:484] found cert: /Users/jenkins/minikube-integration/18431-10628/.minikube/files/etc/ssl/certs/110352.pem (1708 bytes)
	I0318 04:04:56.648749   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 04:04:56.655461   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 04:04:56.662254   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 04:04:56.669786   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0318 04:04:56.676588   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 04:04:56.683094   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 04:04:56.690083   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 04:04:56.697363   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 04:04:56.704432   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 04:04:56.711755   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/11035.pem --> /usr/share/ca-certificates/11035.pem (1338 bytes)
	I0318 04:04:56.718409   12861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18431-10628/.minikube/files/etc/ssl/certs/110352.pem --> /usr/share/ca-certificates/110352.pem (1708 bytes)
	I0318 04:04:56.725534   12861 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 04:04:56.730689   12861 ssh_runner.go:195] Run: openssl version
	I0318 04:04:56.732922   12861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 04:04:56.735742   12861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 04:04:56.737033   12861 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 11:03 /usr/share/ca-certificates/minikubeCA.pem
	I0318 04:04:56.737057   12861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 04:04:56.738659   12861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 04:04:56.741948   12861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11035.pem && ln -fs /usr/share/ca-certificates/11035.pem /etc/ssl/certs/11035.pem"
	I0318 04:04:56.745003   12861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11035.pem
	I0318 04:04:56.746440   12861 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 10:51 /usr/share/ca-certificates/11035.pem
	I0318 04:04:56.746462   12861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11035.pem
	I0318 04:04:56.748189   12861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11035.pem /etc/ssl/certs/51391683.0"
	I0318 04:04:56.750920   12861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110352.pem && ln -fs /usr/share/ca-certificates/110352.pem /etc/ssl/certs/110352.pem"
	I0318 04:04:56.754243   12861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110352.pem
	I0318 04:04:56.755626   12861 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 10:51 /usr/share/ca-certificates/110352.pem
	I0318 04:04:56.755646   12861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110352.pem
	I0318 04:04:56.757264   12861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110352.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 04:04:56.760340   12861 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 04:04:56.761568   12861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 04:04:56.764075   12861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 04:04:56.765992   12861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 04:04:56.768297   12861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 04:04:56.770188   12861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 04:04:56.771958   12861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
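The openssl runs above (`openssl x509 -noout -in <crt> -checkend 86400`) each ask whether a certificate will still be valid 24 hours from now. The same test in Go's crypto/x509, as a sketch; the path in main is one of the files checked above:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // will already be expired at now+d, which is what -checkend tests.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	bad, err := expiresWithin(
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		86400*time.Second)
    	fmt.Println("expires within 24h:", bad, err)
    }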
	I0318 04:04:56.773670   12861 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52368 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 04:04:56.773743   12861 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 04:04:56.783768   12861 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 04:04:56.786743   12861 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 04:04:56.786749   12861 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 04:04:56.786752   12861 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 04:04:56.786777   12861 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 04:04:56.790216   12861 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 04:04:56.790524   12861 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-484000" does not appear in /Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:04:56.790621   12861 kubeconfig.go:62] /Users/jenkins/minikube-integration/18431-10628/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-484000" cluster setting kubeconfig missing "stopped-upgrade-484000" context setting]
	I0318 04:04:56.790823   12861 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/kubeconfig: {Name:mk14a68d5fe20b060d9b3cac470478a530bf535b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:04:56.791259   12861 kapi.go:59] client config for stopped-upgrade-484000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/client.key", CAFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10256ea80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 04:04:56.791568   12861 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 04:04:56.794351   12861 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-484000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
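The drift check above hinges on diff's exit status: 0 means the live kubeadm.yaml matches the freshly rendered .new file, 1 means drift (reconfigure from the new file), anything else is a diff failure. A sketch of that three-way decision; the paths are the ones in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // configDrifted mimics kubeadm.go:634: diff the live config against the
    // newly rendered one and interpret diff's exit code.
    func configDrifted(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, "", nil // exit 0: files identical
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		return true, string(out), nil // exit 1: differences found
    	}
    	return false, "", err // exit 2+: diff itself failed
    }

    func main() {
    	drifted, patch, err := configDrifted(
    		"/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println("drifted:", drifted, "err:", err)
    	fmt.Print(patch)
    }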
	I0318 04:04:56.794358   12861 kubeadm.go:1154] stopping kube-system containers ...
	I0318 04:04:56.794400   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 04:04:56.805497   12861 docker.go:483] Stopping containers: [85734f46676a cce46acfb903 4f03456025e7 f398fedc3f15 2e0bd8ab6807 3d030fdab3e2 e7d513883fbd 165f5ca45430]
	I0318 04:04:56.805567   12861 ssh_runner.go:195] Run: docker stop 85734f46676a cce46acfb903 4f03456025e7 f398fedc3f15 2e0bd8ab6807 3d030fdab3e2 e7d513883fbd 165f5ca45430
	I0318 04:04:56.816514   12861 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 04:04:56.822390   12861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 04:04:56.825693   12861 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 04:04:56.825702   12861 kubeadm.go:156] found existing configuration files:
	
	I0318 04:04:56.825729   12861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/admin.conf
	I0318 04:04:56.828682   12861 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 04:04:56.828708   12861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 04:04:56.831142   12861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/kubelet.conf
	I0318 04:04:56.834025   12861 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 04:04:56.834054   12861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 04:04:56.837210   12861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/controller-manager.conf
	I0318 04:04:56.840004   12861 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 04:04:56.840042   12861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 04:04:56.842685   12861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/scheduler.conf
	I0318 04:04:56.845607   12861 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 04:04:56.845631   12861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
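The four grep/rm pairs above are one loop: an existing kubeconfig is kept only if it already references the expected control-plane endpoint; otherwise it is deleted so kubeadm can regenerate it. A compact sketch of that loop, run locally here for brevity (the real code goes through ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // cleanStaleConfigs removes any kubeconfig that does not mention the
    // expected endpoint; grep exits non-zero when the endpoint is absent
    // or the file is missing, which is the "will remove" case in the log.
    func cleanStaleConfigs(endpoint string, files []string) {
    	for _, f := range files {
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			_ = exec.Command("sudo", "rm", "-f", f).Run()
    			fmt.Println("removed stale", f)
    		}
    	}
    }

    func main() {
    	cleanStaleConfigs("https://control-plane.minikube.internal:52368", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }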
	I0318 04:04:56.848698   12861 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 04:04:56.851623   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:04:56.872784   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:04:57.338791   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:04:57.450581   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:04:57.472689   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
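The restart path does not rerun a full `kubeadm init`; it replays only the phases needed against the existing state, in the order shown above. A sketch mirroring those five invocations; the PATH prefix and config path are taken verbatim from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runPhases replays the selected kubeadm init phases in order,
    // stopping at the first failure.
    func runPhases(cfg string) error {
    	phases := []string{
    		"certs all", "kubeconfig all", "kubelet-start",
    		"control-plane all", "etcd local",
    	}
    	for _, p := range phases {
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config %s`,
    			p, cfg)
    		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
    			return fmt.Errorf("phase %q: %w", p, err)
    		}
    	}
    	return nil
    }

    func main() {
    	fmt.Println(runPhases("/var/tmp/minikube/kubeadm.yaml"))
    }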
	I0318 04:04:57.493618   12861 api_server.go:52] waiting for apiserver process to appear ...
	I0318 04:04:57.493706   12861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:04:57.995928   12861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:04:58.495752   12861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:04:58.501891   12861 api_server.go:72] duration metric: took 1.008300958s to wait for apiserver process to appear ...
	I0318 04:04:58.501906   12861 api_server.go:88] waiting for apiserver healthz status ...
	I0318 04:04:58.501923   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:03.503900   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:03.503928   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:08.504089   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:08.504151   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:13.505028   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:13.505074   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:18.505540   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:18.505558   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:23.506181   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:23.506232   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:28.507192   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:28.507247   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:33.508514   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:33.508560   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:38.510415   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:38.510500   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:43.512894   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:43.512929   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:48.514628   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:48.514786   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:53.517115   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:05:53.517165   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:05:58.519248   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
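Each healthz probe above is an HTTPS GET with a short client timeout, which is why failures surface as "Client.Timeout exceeded" roughly five seconds apart. A minimal polling sketch; the 5s timeout, 500ms retry interval, and overall deadline are assumptions rather than minikube's exact constants, and certificate verification is skipped because the apiserver cert is not in the host trust store:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver's /healthz until it returns 200
    // or the overall deadline passes.
    func waitForHealthz(url string, overall time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s gaps between probes
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(overall)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
    }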
	I0318 04:05:58.519375   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:05:58.535042   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:05:58.535117   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:05:58.547162   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:05:58.547236   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:05:58.567596   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:05:58.567672   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:05:58.582794   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:05:58.582864   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:05:58.593176   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:05:58.593242   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:05:58.603058   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:05:58.603140   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:05:58.613206   12861 logs.go:276] 0 containers: []
	W0318 04:05:58.613219   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:05:58.613280   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:05:58.624291   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:05:58.624310   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:05:58.624315   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:05:58.651105   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:05:58.651116   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:05:58.662480   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:05:58.662493   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:05:58.677332   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:05:58.677347   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:05:58.696273   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:05:58.696286   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:05:58.707714   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:05:58.707724   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:05:58.746285   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:05:58.746293   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:05:58.860732   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:05:58.860746   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:05:58.875382   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:05:58.875393   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:05:58.893172   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:05:58.893184   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:05:58.905289   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:05:58.905300   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:05:58.916727   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:05:58.916739   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:05:58.928812   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:05:58.928824   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:05:58.933452   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:05:58.933459   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:05:58.947244   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:05:58.947256   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:05:58.963701   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:05:58.963713   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:05:58.976560   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:05:58.976573   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
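Every "Gathering logs for X" pair above follows the same pattern: list matching containers with `docker ps -a --filter name=k8s_<component>`, then dump the last 400 lines of each. The later repetitions of this cycle in the log are identical. A sketch of the gather loop over the components probed here:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // gather prints the tail of every container (running or exited)
    // whose name matches the k8s_<component> prefix.
    func gather(component string) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return
    	}
    	for _, id := range strings.Fields(string(out)) {
    		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		fmt.Printf("== %s [%s] ==\n%s", component, id, logs)
    	}
    }

    func main() {
    	for _, c := range []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    	} {
    		gather(c)
    	}
    }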
	I0318 04:06:01.508359   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:06.510071   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:06.510236   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:06.525358   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:06:06.525445   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:06.538848   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:06:06.538926   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:06.549073   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:06:06.549143   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:06.559344   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:06:06.559431   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:06.569542   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:06:06.569612   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:06.580590   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:06:06.580662   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:06.591043   12861 logs.go:276] 0 containers: []
	W0318 04:06:06.591056   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:06.591112   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:06.601496   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:06:06.601514   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:06.601520   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:06.605950   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:06:06.605957   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:06:06.619514   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:06:06.619530   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:06:06.630944   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:06.630957   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:06.656449   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:06:06.656465   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:06.668298   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:06:06.668308   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:06:06.680486   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:06:06.680497   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:06:06.692535   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:06:06.692549   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:06:06.717389   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:06:06.717398   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:06:06.731039   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:06:06.731051   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:06:06.745662   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:06:06.745672   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:06:06.763522   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:06.763536   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:06.801991   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:06.802003   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:06.841377   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:06:06.841389   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:06:06.855256   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:06:06.855267   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:06:06.867079   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:06:06.867093   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:06:06.884056   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:06:06.884068   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:06:09.397071   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:14.399265   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:14.399419   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:14.411862   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:06:14.411931   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:14.422262   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:06:14.422324   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:14.432425   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:06:14.432513   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:14.442442   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:06:14.442518   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:14.452761   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:06:14.452831   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:14.464933   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:06:14.464999   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:14.475381   12861 logs.go:276] 0 containers: []
	W0318 04:06:14.475392   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:14.475448   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:14.485494   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:06:14.485524   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:06:14.485530   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:06:14.497754   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:06:14.497764   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:06:14.508977   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:14.508988   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:14.534720   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:14.534731   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:14.571516   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:14.571525   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:14.575189   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:14.575198   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:14.610375   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:06:14.610386   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:06:14.632126   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:06:14.632137   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:06:14.644155   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:06:14.644168   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:06:14.659161   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:06:14.659175   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:06:14.670681   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:06:14.670691   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:06:14.686764   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:06:14.686777   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:06:14.700866   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:06:14.700879   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:06:14.715311   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:06:14.715325   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:06:14.726399   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:06:14.726410   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:06:14.750957   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:06:14.750968   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:06:14.762773   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:06:14.762785   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:17.276675   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:22.278873   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:22.278997   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:22.292176   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:06:22.292255   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:22.304130   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:06:22.304202   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:22.313871   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:06:22.313932   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:22.324545   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:06:22.324606   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:22.335250   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:06:22.335323   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:22.347323   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:06:22.347382   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:22.365974   12861 logs.go:276] 0 containers: []
	W0318 04:06:22.365986   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:22.366044   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:22.376297   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:06:22.376316   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:22.376322   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:22.416779   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:22.416789   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:22.421387   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:06:22.421394   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:06:22.432475   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:06:22.432486   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:06:22.444165   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:06:22.444176   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:06:22.461992   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:06:22.462003   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:06:22.474431   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:06:22.474441   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:06:22.489028   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:06:22.489038   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:06:22.512438   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:06:22.512449   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:06:22.523767   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:06:22.523779   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:06:22.535116   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:06:22.535129   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:06:22.546714   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:06:22.546728   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:22.558190   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:22.558202   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:22.597282   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:06:22.597295   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:06:22.611599   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:06:22.611613   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:06:22.625901   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:06:22.625911   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:06:22.646165   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:22.646177   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:25.171851   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:30.174017   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:30.174139   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:30.185058   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:06:30.185145   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:30.195981   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:06:30.196056   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:30.206540   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:06:30.206613   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:30.216655   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:06:30.216736   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:30.227189   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:06:30.227271   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:30.237369   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:06:30.237436   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:30.248615   12861 logs.go:276] 0 containers: []
	W0318 04:06:30.248630   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:30.248693   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:30.262555   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:06:30.262594   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:06:30.262600   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:06:30.274319   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:30.274330   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:30.278372   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:06:30.278382   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:06:30.290198   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:06:30.290211   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:06:30.301750   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:30.301761   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:30.326433   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:06:30.326442   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:30.337980   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:30.337990   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:30.376516   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:06:30.376527   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:06:30.390046   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:06:30.390057   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:06:30.414077   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:06:30.414088   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:06:30.428380   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:06:30.428391   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:06:30.440572   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:06:30.440582   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:06:30.452491   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:30.452504   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:30.488404   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:06:30.488415   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:06:30.502542   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:06:30.502553   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:06:30.516931   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:06:30.516944   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:06:30.535893   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:06:30.535902   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:06:33.050365   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:38.051252   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:38.051351   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:38.066898   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:06:38.066977   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:38.078224   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:06:38.078304   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:38.091188   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:06:38.091261   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:38.102387   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:06:38.102457   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:38.112576   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:06:38.112666   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:38.123545   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:06:38.123616   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:38.133581   12861 logs.go:276] 0 containers: []
	W0318 04:06:38.133594   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:38.133652   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:38.144407   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:06:38.144426   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:06:38.144431   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:06:38.156482   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:06:38.156494   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:06:38.173277   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:06:38.173292   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:06:38.185402   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:06:38.185414   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:06:38.197550   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:06:38.197560   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:06:38.211718   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:06:38.211728   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:06:38.226064   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:06:38.226074   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:06:38.245492   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:06:38.245502   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:06:38.258087   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:38.258098   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:38.262101   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:38.262111   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:38.298490   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:06:38.298502   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:06:38.309871   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:06:38.309881   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:06:38.321073   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:38.321082   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:38.346347   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:38.346358   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:38.383199   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:06:38.383212   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:06:38.402951   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:06:38.402960   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:06:38.427687   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:06:38.427700   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:40.943746   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:45.945977   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:45.946100   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:45.958778   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:06:45.958855   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:45.969756   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:06:45.969834   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:45.983014   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:06:45.983099   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:45.994947   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:06:45.995049   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:46.008558   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:06:46.008631   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:46.020554   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:06:46.020632   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:46.031545   12861 logs.go:276] 0 containers: []
	W0318 04:06:46.031559   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:46.031617   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:46.042569   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:06:46.042587   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:06:46.042593   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:06:46.056204   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:06:46.056214   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:06:46.080718   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:06:46.080728   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:06:46.095606   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:06:46.095617   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:06:46.107580   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:06:46.107592   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:06:46.122167   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:06:46.122178   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:06:46.138957   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:06:46.138972   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:06:46.151274   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:46.151284   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:46.156248   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:46.156254   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:46.179823   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:06:46.179831   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:46.191995   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:06:46.192006   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:06:46.213472   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:06:46.213484   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:06:46.224781   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:46.224793   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:46.261778   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:06:46.261791   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:06:46.275727   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:46.275738   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:46.313627   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:06:46.313637   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:06:46.325820   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:06:46.325831   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:06:48.844866   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:06:53.847131   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:06:53.847244   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:06:53.858934   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:06:53.859012   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:06:53.870317   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:06:53.870394   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:06:53.881917   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:06:53.881987   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:06:53.898517   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:06:53.898594   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:06:53.909608   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:06:53.909688   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:06:53.921043   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:06:53.921121   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:06:53.932843   12861 logs.go:276] 0 containers: []
	W0318 04:06:53.932857   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:06:53.932927   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:06:53.944118   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:06:53.944139   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:06:53.944144   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:06:53.960073   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:06:53.960085   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:06:53.979755   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:06:53.979765   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:06:53.991746   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:06:53.991756   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:06:54.028635   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:06:54.028653   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:06:54.033056   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:06:54.033062   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:06:54.046359   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:06:54.046369   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:06:54.069833   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:06:54.069845   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:06:54.080988   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:06:54.081000   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:06:54.104625   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:06:54.104633   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:06:54.117623   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:06:54.117636   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:06:54.151614   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:06:54.151626   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:06:54.165510   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:06:54.165520   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:06:54.176970   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:06:54.176981   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:06:54.188593   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:06:54.188604   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:06:54.202741   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:06:54.202751   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:06:54.213930   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:06:54.213943   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:06:56.726187   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:01.726705   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:01.726788   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:01.743189   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:07:01.743264   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:01.762593   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:07:01.762677   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:01.776797   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:07:01.776873   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:01.788570   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:07:01.788644   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:01.799844   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:07:01.799920   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:01.811935   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:07:01.812011   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:01.824053   12861 logs.go:276] 0 containers: []
	W0318 04:07:01.824064   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:01.824132   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:01.835228   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:07:01.835244   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:01.835249   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:01.839748   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:07:01.839760   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:07:01.855551   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:07:01.855562   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:07:01.867536   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:07:01.867549   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:07:01.884510   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:07:01.884527   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:07:01.897012   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:01.897026   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:01.937085   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:07:01.937105   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:07:01.968211   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:07:01.968223   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:07:01.983377   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:07:01.983389   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:07:02.000926   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:07:02.000939   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:07:02.012475   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:07:02.012487   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:07:02.024175   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:02.024187   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:02.060157   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:07:02.060170   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:07:02.078928   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:07:02.078939   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:07:02.093772   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:07:02.093783   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:07:02.105124   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:02.105134   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:02.129525   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:07:02.129534   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:04.644651   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:09.646810   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:09.646897   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:09.658315   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:07:09.658388   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:09.669629   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:07:09.669716   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:09.680934   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:07:09.681016   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:09.691949   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:07:09.692023   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:09.703402   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:07:09.703476   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:09.715631   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:07:09.715700   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:09.726836   12861 logs.go:276] 0 containers: []
	W0318 04:07:09.726848   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:09.726909   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:09.738119   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:07:09.738141   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:07:09.738147   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:09.750785   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:09.750799   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:09.755388   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:09.755398   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:09.791696   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:07:09.791709   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:07:09.827876   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:07:09.827889   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:07:09.848153   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:07:09.848166   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:07:09.863831   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:07:09.863845   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:07:09.877407   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:07:09.877419   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:07:09.889480   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:09.889492   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:09.929280   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:07:09.929292   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:07:09.943683   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:07:09.943693   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:07:09.958451   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:07:09.958464   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:07:09.970547   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:07:09.970557   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:07:09.983925   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:07:09.983937   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:07:09.998636   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:07:09.998647   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:07:10.010591   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:07:10.010601   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:07:10.027950   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:10.027960   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:12.553025   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:17.553622   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:17.553705   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:17.564910   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:07:17.564980   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:17.575616   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:07:17.575690   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:17.587083   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:07:17.587149   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:17.599703   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:07:17.599777   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:17.611390   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:07:17.611476   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:17.622853   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:07:17.622931   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:17.633692   12861 logs.go:276] 0 containers: []
	W0318 04:07:17.633703   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:17.633765   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:17.645595   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:07:17.645617   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:07:17.645622   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:07:17.659249   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:07:17.659261   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:07:17.674007   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:17.674020   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:17.698181   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:07:17.698194   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:17.711969   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:17.711978   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:17.754505   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:17.754514   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:17.793790   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:07:17.793804   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:07:17.808514   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:07:17.808529   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:07:17.829787   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:07:17.829798   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:07:17.842613   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:07:17.842627   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:07:17.861248   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:07:17.861261   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:07:17.883149   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:17.883162   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:17.888336   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:07:17.888350   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:07:17.913799   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:07:17.913814   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:07:17.928931   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:07:17.928943   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:07:17.943930   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:07:17.943949   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:07:17.960109   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:07:17.960122   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:07:20.473607   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:25.475911   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:25.475983   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:25.488104   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:07:25.488181   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:25.499126   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:07:25.499202   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:25.509884   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:07:25.509956   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:25.524725   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:07:25.524798   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:25.537035   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:07:25.537105   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:25.549410   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:07:25.549486   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:25.560748   12861 logs.go:276] 0 containers: []
	W0318 04:07:25.560757   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:25.560818   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:25.572130   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:07:25.572150   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:25.572156   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:25.613118   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:07:25.613136   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:07:25.625465   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:07:25.625477   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:07:25.638125   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:25.638137   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:25.642579   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:07:25.642587   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:07:25.657541   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:07:25.657549   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:07:25.672584   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:07:25.672594   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:07:25.687782   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:07:25.687793   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:07:25.700672   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:07:25.700684   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:07:25.713308   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:07:25.713318   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:07:25.740953   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:07:25.740964   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:07:25.756317   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:07:25.756326   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:07:25.768667   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:25.768680   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:25.794537   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:07:25.794557   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:25.807180   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:25.807199   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:25.844361   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:07:25.844377   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:07:25.866747   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:07:25.866762   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:07:28.379502   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:33.381548   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:33.381629   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:33.400011   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:07:33.400090   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:33.411252   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:07:33.411326   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:33.424086   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:07:33.424161   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:33.436003   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:07:33.436081   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:33.451930   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:07:33.452005   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:33.463137   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:07:33.463212   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:33.473959   12861 logs.go:276] 0 containers: []
	W0318 04:07:33.473973   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:33.474034   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:33.486015   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:07:33.486037   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:33.486042   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:33.526983   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:07:33.526999   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:07:33.545444   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:07:33.545461   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:07:33.558746   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:33.558759   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:33.583404   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:33.583422   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:33.588062   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:33.588076   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:33.625582   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:07:33.625596   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:07:33.640956   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:07:33.640968   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:07:33.656987   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:07:33.657000   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:33.669626   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:07:33.669638   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:07:33.691404   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:07:33.691417   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:07:33.704464   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:07:33.704474   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:07:33.717730   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:07:33.717743   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:07:33.738943   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:07:33.738960   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:07:33.752320   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:07:33.752335   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:07:33.764721   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:07:33.764731   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:07:33.790438   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:07:33.790448   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:07:36.305326   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:41.307755   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:41.308007   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:41.330498   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:07:41.330593   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:41.345682   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:07:41.345763   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:41.358673   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:07:41.358743   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:41.370747   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:07:41.370824   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:41.381744   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:07:41.381818   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:41.393653   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:07:41.393721   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:41.405278   12861 logs.go:276] 0 containers: []
	W0318 04:07:41.405290   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:41.405349   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:41.424370   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:07:41.424386   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:07:41.424390   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:07:41.437434   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:07:41.437446   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:07:41.455385   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:07:41.455395   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:07:41.467595   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:07:41.467606   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:07:41.479695   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:07:41.479706   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:07:41.494226   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:07:41.494236   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:07:41.506122   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:07:41.506133   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:07:41.521805   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:07:41.521816   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:07:41.534306   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:41.534316   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:41.574039   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:41.574061   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:41.611667   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:07:41.611680   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:07:41.638130   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:07:41.638146   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:41.652901   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:41.652914   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:41.657479   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:07:41.657491   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:07:41.672490   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:41.672501   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:41.697723   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:07:41.697742   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:07:41.713551   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:07:41.713570   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:07:44.231777   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:49.234136   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
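
The five-second gap between each "Checking apiserver healthz" line and its matching "stopped" line is a client-side timeout: the apiserver never answers, so every probe dies with "context deadline exceeded". Below is a minimal Go sketch of such a probe; the helper name and the InsecureSkipVerify transport are illustrative assumptions, not minikube's actual api_server.go code.

// healthz_probe.go — minimal sketch of an apiserver healthz probe with a
// 5s client timeout, assuming a self-signed apiserver certificate.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		// Matches the ~5s gap between "Checking" and "stopped" in the log.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "context deadline exceeded (Client.Timeout exceeded
		// while awaiting headers)", as seen above
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}

Each failed probe is followed by a full container enumeration and log sweep before the next attempt, which is why the same gathering cycle repeats below.
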
	I0318 04:07:49.234521   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:49.264900   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:07:49.265034   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:49.285565   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:07:49.285655   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:49.300893   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:07:49.300978   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:49.313215   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:07:49.313293   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:49.325180   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:07:49.325256   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:49.337601   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:07:49.337640   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:49.349068   12861 logs.go:276] 0 containers: []
	W0318 04:07:49.349076   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:49.349108   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:49.360783   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
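
Before each log sweep, one docker ps -a --filter=name=k8s_<component> --format={{.ID}} call per component maps the k8s_ container-name prefix to container IDs; an empty result yields the `No container was found matching "kindnet"` warning above. A sketch of that enumeration, assuming a local docker CLI (the function name is hypothetical, not minikube's logs.go):

// container_ids.go — sketch of the per-component container enumeration.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; Fields also trims the trailing newline.
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// 0 containers ⇒ the "No container was found" warning above.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
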
	I0318 04:07:49.360798   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:07:49.360803   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:07:49.373420   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:49.373429   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:49.414597   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:07:49.414609   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:07:49.429734   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:07:49.429745   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:07:49.444553   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:07:49.444568   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:07:49.462784   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:49.462793   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:49.467175   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:07:49.467186   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:07:49.479965   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:07:49.479977   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:07:49.492147   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:49.492158   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:49.515726   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:07:49.515742   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:49.528747   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:49.528758   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:49.566686   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:07:49.566699   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:07:49.601139   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:07:49.601150   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:07:49.623262   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:07:49.623284   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:07:49.635552   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:07:49.635567   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:07:49.648682   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:07:49.648699   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:07:49.664186   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:07:49.664202   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:07:52.178604   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:07:57.180697   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:07:57.180803   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:07:57.191770   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:07:57.191845   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:07:57.202466   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:07:57.202538   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:07:57.213695   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:07:57.213781   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:07:57.228641   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:07:57.228711   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:07:57.239957   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:07:57.240001   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:07:57.251942   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:07:57.251999   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:07:57.262700   12861 logs.go:276] 0 containers: []
	W0318 04:07:57.262713   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:07:57.262778   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:07:57.280395   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:07:57.280412   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:07:57.280417   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:07:57.296195   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:07:57.296213   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:07:57.320988   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:07:57.321000   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:07:57.333545   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:07:57.333557   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:07:57.346326   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:07:57.346339   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:07:57.359123   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:07:57.359137   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:07:57.399701   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:07:57.399715   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:07:57.414243   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:07:57.414255   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:07:57.439820   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:07:57.439832   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:07:57.452482   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:07:57.452494   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:07:57.467920   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:07:57.467937   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:07:57.493840   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:07:57.493852   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:07:57.512363   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:07:57.512376   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:07:57.525379   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:07:57.525393   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:07:57.530227   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:07:57.530240   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:07:57.568930   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:07:57.568941   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:07:57.580665   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:07:57.580676   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:08:00.095714   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:05.097992   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:05.098186   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:08:05.110115   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:08:05.110197   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:08:05.121295   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:08:05.121368   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:08:05.132712   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:08:05.132793   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:08:05.145284   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:08:05.145362   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:08:05.156739   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:08:05.156813   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:08:05.176498   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:08:05.176576   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:08:05.189026   12861 logs.go:276] 0 containers: []
	W0318 04:08:05.189039   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:08:05.189105   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:08:05.200993   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:08:05.201014   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:08:05.201021   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:08:05.242892   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:08:05.242904   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:08:05.280748   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:08:05.280756   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:08:05.295441   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:08:05.295450   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:08:05.308189   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:08:05.308204   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:08:05.320898   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:08:05.320909   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:08:05.336382   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:08:05.336395   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:08:05.349385   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:08:05.349399   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:08:05.365160   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:08:05.365173   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:08:05.376882   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:08:05.376894   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:08:05.400684   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:08:05.400699   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:08:05.427557   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:08:05.427567   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:08:05.443035   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:08:05.443045   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:08:05.456368   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:08:05.456380   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:08:05.460987   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:08:05.460996   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:08:05.473704   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:08:05.473714   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:08:05.493156   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:08:05.493167   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:08:08.006708   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:13.007689   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:13.007926   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:08:13.031263   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:08:13.031385   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:08:13.047822   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:08:13.047896   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:08:13.061962   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:08:13.062040   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:08:13.073006   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:08:13.073083   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:08:13.084371   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:08:13.084442   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:08:13.095890   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:08:13.095961   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:08:13.107183   12861 logs.go:276] 0 containers: []
	W0318 04:08:13.107195   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:08:13.107258   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:08:13.118626   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:08:13.118645   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:08:13.118651   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:08:13.135315   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:08:13.135329   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:08:13.147544   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:08:13.147557   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:08:13.172882   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:08:13.172899   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:08:13.189108   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:08:13.189122   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:08:13.202736   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:08:13.202750   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:08:13.223535   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:08:13.223549   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:08:13.236879   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:08:13.236895   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:08:13.251853   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:08:13.251865   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:08:13.256348   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:08:13.256358   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:08:13.294892   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:08:13.294902   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:08:13.310751   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:08:13.310764   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:08:13.324463   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:08:13.324475   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:08:13.364620   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:08:13.364636   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:08:13.391079   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:08:13.391091   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:08:13.404962   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:08:13.404975   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:08:13.418345   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:08:13.418361   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:08:15.936138   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:20.938483   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:20.938747   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:08:20.959544   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:08:20.959649   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:08:20.975131   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:08:20.975209   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:08:20.989085   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:08:20.989159   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:08:21.000680   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:08:21.000766   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:08:21.012000   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:08:21.012078   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:08:21.023064   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:08:21.023140   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:08:21.034485   12861 logs.go:276] 0 containers: []
	W0318 04:08:21.034497   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:08:21.034559   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:08:21.045781   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:08:21.045803   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:08:21.045809   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:08:21.050320   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:08:21.050330   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:08:21.065290   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:08:21.065305   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:08:21.077408   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:08:21.077421   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:08:21.089401   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:08:21.089413   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:08:21.104403   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:08:21.104415   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:08:21.120345   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:08:21.120358   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:08:21.144550   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:08:21.144565   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:08:21.184430   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:08:21.184450   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:08:21.210614   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:08:21.210635   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:08:21.226879   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:08:21.226893   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:08:21.239699   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:08:21.239712   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:08:21.258532   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:08:21.258545   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:08:21.276701   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:08:21.276714   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:08:21.337744   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:08:21.337756   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:08:21.353091   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:08:21.353107   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:08:21.368812   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:08:21.368829   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:08:23.883204   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:28.885311   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:28.885434   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:08:28.897309   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:08:28.897393   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:08:28.908644   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:08:28.908721   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:08:28.919982   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:08:28.920051   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:08:28.931154   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:08:28.931242   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:08:28.943378   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:08:28.943459   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:08:28.961053   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:08:28.961131   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:08:28.974904   12861 logs.go:276] 0 containers: []
	W0318 04:08:28.974918   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:08:28.974984   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:08:28.990294   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:08:28.990314   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:08:28.990320   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:08:29.017236   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:08:29.017248   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:08:29.034687   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:08:29.034702   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:08:29.048588   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:08:29.048603   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:08:29.060722   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:08:29.060735   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:08:29.072750   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:08:29.072764   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:08:29.112578   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:08:29.112589   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:08:29.117804   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:08:29.117819   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:08:29.137548   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:08:29.137563   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:08:29.150443   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:08:29.150457   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:08:29.166217   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:08:29.166225   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:08:29.178569   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:08:29.178579   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:08:29.191641   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:08:29.191653   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:08:29.215444   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:08:29.215461   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:08:29.251935   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:08:29.251947   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:08:29.272705   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:08:29.272720   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:08:29.285407   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:08:29.285419   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:08:31.806858   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:36.809234   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:36.809624   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:08:36.847722   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:08:36.847867   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:08:36.869276   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:08:36.869381   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:08:36.884438   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:08:36.884521   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:08:36.897404   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:08:36.897475   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:08:36.908481   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:08:36.908550   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:08:36.919347   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:08:36.919413   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:08:36.929754   12861 logs.go:276] 0 containers: []
	W0318 04:08:36.929772   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:08:36.929830   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:08:36.940790   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:08:36.940808   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:08:36.940814   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:08:36.954990   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:08:36.955001   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:08:36.981586   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:08:36.981597   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:08:36.999115   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:08:36.999126   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:08:37.010614   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:08:37.010627   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:08:37.022362   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:08:37.022374   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:08:37.058010   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:08:37.058022   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:08:37.073309   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:08:37.073319   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:08:37.089595   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:08:37.089606   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:08:37.094133   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:08:37.094142   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:08:37.117108   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:08:37.117118   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:08:37.140604   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:08:37.140612   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:08:37.155317   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:08:37.155332   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:08:37.167895   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:08:37.167907   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:08:37.180509   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:08:37.180521   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:08:37.193517   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:08:37.193529   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:08:37.205317   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:08:37.205329   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:08:39.744558   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:44.746682   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:44.746792   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:08:44.760740   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:08:44.760815   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:08:44.772482   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:08:44.772555   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:08:44.782740   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:08:44.782810   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:08:44.793145   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:08:44.793220   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:08:44.803760   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:08:44.803827   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:08:44.814525   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:08:44.814595   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:08:44.825270   12861 logs.go:276] 0 containers: []
	W0318 04:08:44.825283   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:08:44.825346   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:08:44.835786   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:08:44.835808   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:08:44.835813   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:08:44.847212   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:08:44.847227   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:08:44.864388   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:08:44.864400   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:08:44.875425   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:08:44.875435   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:08:44.898505   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:08:44.898518   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:08:44.902738   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:08:44.902750   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:08:44.915777   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:08:44.915789   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:08:44.928389   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:08:44.928399   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:08:44.939975   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:08:44.939984   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:08:44.954998   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:08:44.955009   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:08:44.992971   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:08:44.992983   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:08:45.007157   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:08:45.007166   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:08:45.021883   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:08:45.021896   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:08:45.033795   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:08:45.033807   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:08:45.045813   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:08:45.045827   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:08:45.083413   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:08:45.083432   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:08:45.111047   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:08:45.111058   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:08:47.627060   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:08:52.629221   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:08:52.629395   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:08:52.641210   12861 logs.go:276] 2 containers: [90ec027dcc21 2e0bd8ab6807]
	I0318 04:08:52.641292   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:08:52.652073   12861 logs.go:276] 2 containers: [f5c15fd7db20 f398fedc3f15]
	I0318 04:08:52.652142   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:08:52.662760   12861 logs.go:276] 1 containers: [7bfe70c8d867]
	I0318 04:08:52.662841   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:08:52.673501   12861 logs.go:276] 2 containers: [680d3f2c89c2 85734f46676a]
	I0318 04:08:52.673575   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:08:52.684786   12861 logs.go:276] 1 containers: [5c83b6a595e4]
	I0318 04:08:52.684862   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:08:52.695769   12861 logs.go:276] 2 containers: [df815b32a29e cce46acfb903]
	I0318 04:08:52.695845   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:08:52.708632   12861 logs.go:276] 0 containers: []
	W0318 04:08:52.708645   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:08:52.708704   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:08:52.719202   12861 logs.go:276] 2 containers: [b8ccd69ba524 c37f7aade2c3]
	I0318 04:08:52.719224   12861 logs.go:123] Gathering logs for etcd [f5c15fd7db20] ...
	I0318 04:08:52.719230   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5c15fd7db20"
	I0318 04:08:52.733666   12861 logs.go:123] Gathering logs for storage-provisioner [c37f7aade2c3] ...
	I0318 04:08:52.733679   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c37f7aade2c3"
	I0318 04:08:52.745854   12861 logs.go:123] Gathering logs for etcd [f398fedc3f15] ...
	I0318 04:08:52.745870   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f398fedc3f15"
	I0318 04:08:52.761337   12861 logs.go:123] Gathering logs for coredns [7bfe70c8d867] ...
	I0318 04:08:52.761350   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bfe70c8d867"
	I0318 04:08:52.774123   12861 logs.go:123] Gathering logs for kube-scheduler [680d3f2c89c2] ...
	I0318 04:08:52.774135   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 680d3f2c89c2"
	I0318 04:08:52.789531   12861 logs.go:123] Gathering logs for kube-controller-manager [cce46acfb903] ...
	I0318 04:08:52.789541   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cce46acfb903"
	I0318 04:08:52.805333   12861 logs.go:123] Gathering logs for storage-provisioner [b8ccd69ba524] ...
	I0318 04:08:52.805345   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ccd69ba524"
	I0318 04:08:52.817152   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:08:52.817163   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:08:52.854889   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:08:52.854898   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:08:52.890138   12861 logs.go:123] Gathering logs for kube-apiserver [90ec027dcc21] ...
	I0318 04:08:52.890155   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90ec027dcc21"
	I0318 04:08:52.905140   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:08:52.905154   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:08:52.923655   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:08:52.923667   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:08:52.944874   12861 logs.go:123] Gathering logs for kube-proxy [5c83b6a595e4] ...
	I0318 04:08:52.944882   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c83b6a595e4"
	I0318 04:08:52.960577   12861 logs.go:123] Gathering logs for kube-controller-manager [df815b32a29e] ...
	I0318 04:08:52.960592   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df815b32a29e"
	I0318 04:08:52.978683   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:08:52.978699   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:08:52.982846   12861 logs.go:123] Gathering logs for kube-apiserver [2e0bd8ab6807] ...
	I0318 04:08:52.982852   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0bd8ab6807"
	I0318 04:08:53.013277   12861 logs.go:123] Gathering logs for kube-scheduler [85734f46676a] ...
	I0318 04:08:53.013288   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85734f46676a"
	I0318 04:08:55.530088   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:00.531440   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:00.531531   12861 kubeadm.go:591] duration metric: took 4m3.750867375s to restartPrimaryControlPlane
	W0318 04:09:00.531600   12861 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 04:09:00.531633   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0318 04:09:01.575372   12861 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.043709417s)
	I0318 04:09:01.575453   12861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 04:09:01.580409   12861 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 04:09:01.583106   12861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 04:09:01.585641   12861 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 04:09:01.585647   12861 kubeadm.go:156] found existing configuration files:
	
	I0318 04:09:01.585672   12861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/admin.conf
	I0318 04:09:01.588599   12861 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 04:09:01.588623   12861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 04:09:01.591494   12861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/kubelet.conf
	I0318 04:09:01.594026   12861 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 04:09:01.594046   12861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 04:09:01.597139   12861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/controller-manager.conf
	I0318 04:09:01.600192   12861 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 04:09:01.600216   12861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 04:09:01.602667   12861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/scheduler.conf
	I0318 04:09:01.605386   12861 kubeadm.go:162] "https://control-plane.minikube.internal:52368" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52368 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 04:09:01.605408   12861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
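
The four grep/rm pairs above are a stale-config sweep: any kubeconfig that cannot be confirmed to reference https://control-plane.minikube.internal:52368 is removed so the following kubeadm init can regenerate it. Here every grep exits with status 2 because kubeadm reset had already deleted the files. A compact sketch of the sweep (a hypothetical standalone program, not minikube's kubeadm.go):

// stale_config_sweep.go — sketch of the grep-then-remove pass shown above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:52368"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep exits non-zero when the pattern is absent or the file is
		// missing; in the log it is status 2 because the files are gone.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			_ = exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}
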
	I0318 04:09:01.608434   12861 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 04:09:01.626784   12861 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0318 04:09:01.626856   12861 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 04:09:01.676927   12861 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 04:09:01.677061   12861 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 04:09:01.677109   12861 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0318 04:09:01.725324   12861 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 04:09:01.729581   12861 out.go:204]   - Generating certificates and keys ...
	I0318 04:09:01.729619   12861 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 04:09:01.729650   12861 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 04:09:01.729687   12861 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 04:09:01.729720   12861 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 04:09:01.729759   12861 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 04:09:01.729786   12861 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 04:09:01.729817   12861 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 04:09:01.729854   12861 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 04:09:01.729899   12861 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 04:09:01.729937   12861 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 04:09:01.729957   12861 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 04:09:01.729981   12861 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 04:09:01.814116   12861 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 04:09:01.957309   12861 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 04:09:01.993746   12861 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 04:09:02.156018   12861 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 04:09:02.188633   12861 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 04:09:02.189086   12861 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 04:09:02.189119   12861 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 04:09:02.271065   12861 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 04:09:02.279240   12861 out.go:204]   - Booting up control plane ...
	I0318 04:09:02.279293   12861 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 04:09:02.279351   12861 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 04:09:02.279468   12861 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 04:09:02.279568   12861 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 04:09:02.279665   12861 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 04:09:06.776469   12861 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501020 seconds
	I0318 04:09:06.776593   12861 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 04:09:06.780246   12861 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 04:09:07.287395   12861 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 04:09:07.287500   12861 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-484000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 04:09:07.792058   12861 kubeadm.go:309] [bootstrap-token] Using token: nbki6v.fi2n2wh3y67bjfsb
	I0318 04:09:07.797951   12861 out.go:204]   - Configuring RBAC rules ...
	I0318 04:09:07.798013   12861 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 04:09:07.798062   12861 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 04:09:07.801554   12861 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 04:09:07.802504   12861 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 04:09:07.803363   12861 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 04:09:07.804196   12861 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 04:09:07.808543   12861 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 04:09:07.954953   12861 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 04:09:08.195595   12861 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 04:09:08.196032   12861 kubeadm.go:309] 
	I0318 04:09:08.196075   12861 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 04:09:08.196079   12861 kubeadm.go:309] 
	I0318 04:09:08.196120   12861 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 04:09:08.196127   12861 kubeadm.go:309] 
	I0318 04:09:08.196139   12861 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 04:09:08.196164   12861 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 04:09:08.196189   12861 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 04:09:08.196193   12861 kubeadm.go:309] 
	I0318 04:09:08.196223   12861 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 04:09:08.196227   12861 kubeadm.go:309] 
	I0318 04:09:08.196258   12861 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 04:09:08.196262   12861 kubeadm.go:309] 
	I0318 04:09:08.196289   12861 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 04:09:08.196325   12861 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 04:09:08.196379   12861 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 04:09:08.196382   12861 kubeadm.go:309] 
	I0318 04:09:08.196428   12861 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 04:09:08.196466   12861 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 04:09:08.196468   12861 kubeadm.go:309] 
	I0318 04:09:08.196510   12861 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token nbki6v.fi2n2wh3y67bjfsb \
	I0318 04:09:08.196571   12861 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8277259516d27b69cae5fa1851268f91d26ad55d9793249b2b30ee52eba5b372 \
	I0318 04:09:08.196583   12861 kubeadm.go:309] 	--control-plane 
	I0318 04:09:08.196586   12861 kubeadm.go:309] 
	I0318 04:09:08.196647   12861 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 04:09:08.196651   12861 kubeadm.go:309] 
	I0318 04:09:08.196701   12861 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token nbki6v.fi2n2wh3y67bjfsb \
	I0318 04:09:08.196758   12861 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8277259516d27b69cae5fa1851268f91d26ad55d9793249b2b30ee52eba5b372 
	I0318 04:09:08.196817   12861 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
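The sha256 value in the join commands above is kubeadm's --discovery-token-ca-cert-hash: a SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, which lets a joining node pin the CA before trusting the API server. A short Go sketch that recomputes it from the CA file (the /var/lib/minikube/certs path matches the certificateDir logged earlier; adjust as needed):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The discovery hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}
```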
	I0318 04:09:08.196823   12861 cni.go:84] Creating CNI manager for ""
	I0318 04:09:08.196832   12861 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:09:08.200945   12861 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 04:09:08.207956   12861 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 04:09:08.210878   12861 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
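The 457-byte conflist itself is not shown in the log. A sketch of what a minimal bridge conflist of this kind typically looks like, written to the same path; all field values here are illustrative assumptions, not the actual payload:

```go
package main

import "os"

// Illustrative bridge CNI config: a bridge plugin with host-local IPAM plus
// a portmap plugin for hostPort support. Values are assumptions for the
// sketch, not the exact contents minikube copied above.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```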
	I0318 04:09:08.215602   12861 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 04:09:08.215644   12861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 04:09:08.215676   12861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-484000 minikube.k8s.io/updated_at=2024_03_18T04_09_08_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1eefae2ca5be099618e61623fc53c8bbe0e383fd minikube.k8s.io/name=stopped-upgrade-484000 minikube.k8s.io/primary=true
	I0318 04:09:08.262702   12861 kubeadm.go:1107] duration metric: took 47.093083ms to wait for elevateKubeSystemPrivileges
	I0318 04:09:08.262722   12861 ops.go:34] apiserver oom_adj: -16
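The -16 above is the kube-apiserver's OOM score adjustment, read from /proc/<pid>/oom_adj; a negative value makes the kernel less eager to kill the process under memory pressure. A rough local equivalent of that probe (the real check uses pgrep -xnf with a fuller command-line pattern, as logged above):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the kube-apiserver PID (pgrep equivalent), then read its OOM
	// score adjustment; a protected apiserver shows a value such as -16.
	pid, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	data, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
}
```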
	W0318 04:09:08.262737   12861 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 04:09:08.262742   12861 kubeadm.go:393] duration metric: took 4m11.495364792s to StartCluster
	I0318 04:09:08.262752   12861 settings.go:142] acquiring lock: {Name:mk6b2370db0877d87697e89b229a47432418434b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:09:08.262835   12861 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:09:08.263228   12861 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/kubeconfig: {Name:mk14a68d5fe20b060d9b3cac470478a530bf535b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:09:08.263686   12861 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:09:08.267742   12861 out.go:177] * Verifying Kubernetes components...
	I0318 04:09:08.263762   12861 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:09:08.263734   12861 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 04:09:08.267754   12861 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-484000"
	I0318 04:09:08.267769   12861 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-484000"
	W0318 04:09:08.267774   12861 addons.go:243] addon storage-provisioner should already be in state true
	I0318 04:09:08.267794   12861 host.go:66] Checking if "stopped-upgrade-484000" exists ...
	I0318 04:09:08.267814   12861 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-484000"
	I0318 04:09:08.275894   12861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:09:08.275938   12861 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-484000"
	I0318 04:09:08.279897   12861 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:09:08.281138   12861 kapi.go:59] client config for stopped-upgrade-484000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/client.key", CAFile:"/Users/jenkins/minikube-integration/18431-10628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10256ea80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
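The rest.Config dump above reduces to three inputs: the Host URL and the client cert/key plus CA file paths. A hand-rolled client-go sketch that builds the equivalent config and exercises it with one call (any call would do; listing nodes is just the simplest):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Same Host and TLS file paths as in the config dump above.
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/stopped-upgrade-484000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/18431-10628/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err) // in this run it would time out, like the healthz probes below
	}
	fmt.Println("nodes:", len(nodes.Items))
}
```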
	I0318 04:09:08.283938   12861 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 04:09:08.283943   12861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 04:09:08.283950   12861 sshutil.go:53] new ssh client: &{IP:localhost Port:52306 SSHKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	I0318 04:09:08.284001   12861 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-484000"
	W0318 04:09:08.284008   12861 addons.go:243] addon default-storageclass should already be in state true
	I0318 04:09:08.284018   12861 host.go:66] Checking if "stopped-upgrade-484000" exists ...
	I0318 04:09:08.284751   12861 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 04:09:08.284756   12861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 04:09:08.284760   12861 sshutil.go:53] new ssh client: &{IP:localhost Port:52306 SSHKeyPath:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	I0318 04:09:08.364813   12861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 04:09:08.370749   12861 api_server.go:52] waiting for apiserver process to appear ...
	I0318 04:09:08.370794   12861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:09:08.374585   12861 api_server.go:72] duration metric: took 110.889666ms to wait for apiserver process to appear ...
	I0318 04:09:08.374593   12861 api_server.go:88] waiting for apiserver healthz status ...
	I0318 04:09:08.374600   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:08.438452   12861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 04:09:08.440433   12861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 04:09:13.376626   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
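The probe pattern here: each healthz attempt gets roughly five seconds (compare the timestamps), and an expired attempt surfaces as Go's http.Client "context deadline exceeded" error seen throughout this log. A minimal sketch of such a loop; it skips TLS verification for brevity, whereas the real check would pin the cluster CA:

```go
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Probe-only sketch; a real health check verifies the cluster CA.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	for {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		req, _ := http.NewRequestWithContext(ctx, http.MethodGet,
			"https://10.0.2.15:8443/healthz", nil)
		resp, err := client.Do(req)
		cancel()
		if err != nil {
			// Matches the api_server.go:269 "stopped:" lines in this log.
			fmt.Println("stopped:", err)
			time.Sleep(time.Second)
			continue
		}
		healthy := resp.StatusCode == http.StatusOK
		resp.Body.Close()
		if healthy {
			fmt.Println("apiserver healthy")
			return
		}
	}
}
```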
	I0318 04:09:13.376675   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:18.376948   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:18.377002   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:23.377273   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:23.377331   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:28.377747   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:28.377770   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:33.378322   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:33.378381   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:38.379044   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:38.379075   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0318 04:09:38.827659   12861 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0318 04:09:38.832003   12861 out.go:177] * Enabled addons: storage-provisioner
	I0318 04:09:38.841031   12861 addons.go:505] duration metric: took 30.5780685s for enable addons: enabled=[storage-provisioner]
	I0318 04:09:43.379933   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:43.380016   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:48.381147   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:48.381175   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:53.382618   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:53.382648   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:09:58.384696   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:09:58.384725   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:03.386797   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:03.386847   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:08.383325   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:08.383437   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:08.394902   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:10:08.394976   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:08.405838   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:10:08.405917   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:08.418165   12861 logs.go:276] 2 containers: [6d6ceecff203 98858ec38534]
	I0318 04:10:08.418238   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:08.428989   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:10:08.429065   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:08.439589   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:10:08.439663   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:08.450799   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:10:08.450877   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:08.461415   12861 logs.go:276] 0 containers: []
	W0318 04:10:08.461426   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:08.461507   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:08.471997   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:10:08.472014   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:10:08.472021   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:10:08.490186   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:10:08.490196   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:10:08.502194   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:08.502208   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:08.539344   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:08.539368   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:08.544581   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:08.544592   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:08.584004   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:10:08.584018   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:10:08.599045   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:10:08.599061   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:10:08.611987   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:10:08.612000   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:08.626206   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:10:08.626219   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:10:08.651549   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:10:08.651562   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:10:08.664508   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:10:08.664521   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:10:08.680502   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:10:08.680515   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:10:08.694911   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:08.694922   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
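Each failed probe above triggers the same diagnostic sweep, which then repeats for every retry below: enumerate the k8s_-prefixed containers by name filter, then tail 400 lines from each. A condensed local sketch of that sweep (minikube runs these docker commands over SSH inside the guest):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, name := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	} {
		// docker ps -a --filter=name=k8s_<name> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println(name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			// docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s] <==\n%s\n", name, id, logs)
		}
	}
}
```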
	I0318 04:10:11.217325   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:16.211836   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:16.212055   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:16.228540   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:10:16.228621   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:16.240157   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:10:16.240230   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:16.251040   12861 logs.go:276] 2 containers: [6d6ceecff203 98858ec38534]
	I0318 04:10:16.251105   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:16.261348   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:10:16.261411   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:16.271555   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:10:16.271628   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:16.281587   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:10:16.281651   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:16.291812   12861 logs.go:276] 0 containers: []
	W0318 04:10:16.291824   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:16.291886   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:16.307461   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:10:16.307479   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:10:16.307484   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:10:16.318556   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:10:16.318569   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:10:16.340813   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:10:16.340825   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:10:16.353170   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:16.353183   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:16.378316   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:16.378336   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:16.414218   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:16.414236   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:16.419201   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:16.419211   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:16.457549   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:10:16.457562   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:10:16.472202   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:10:16.472213   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:16.485509   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:10:16.485521   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:10:16.502163   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:10:16.502174   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:10:16.515352   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:10:16.515361   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:10:16.531310   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:10:16.531322   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:10:19.043659   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:24.040662   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:24.040892   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:24.062636   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:10:24.062734   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:24.076610   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:10:24.076691   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:24.087829   12861 logs.go:276] 2 containers: [6d6ceecff203 98858ec38534]
	I0318 04:10:24.087897   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:24.098173   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:10:24.098241   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:24.108748   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:10:24.108819   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:24.119215   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:10:24.119289   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:24.129528   12861 logs.go:276] 0 containers: []
	W0318 04:10:24.129539   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:24.129599   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:24.140182   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:10:24.140198   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:24.140204   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:24.175356   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:10:24.175372   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:10:24.189941   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:10:24.189951   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:10:24.208991   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:10:24.209007   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:10:24.221653   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:10:24.221665   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:10:24.237768   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:10:24.237779   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:10:24.251824   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:10:24.251836   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:10:24.264123   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:24.264136   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:24.269148   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:24.269159   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:24.307294   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:10:24.307308   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:10:24.321953   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:10:24.321969   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:10:24.338008   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:24.338019   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:24.363792   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:10:24.363810   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:26.876806   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:31.876175   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:31.876329   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:31.892212   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:10:31.892296   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:31.904475   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:10:31.904552   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:31.915701   12861 logs.go:276] 2 containers: [6d6ceecff203 98858ec38534]
	I0318 04:10:31.915773   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:31.925973   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:10:31.926037   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:31.936314   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:10:31.936380   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:31.947134   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:10:31.947202   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:31.957206   12861 logs.go:276] 0 containers: []
	W0318 04:10:31.957219   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:31.957274   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:31.967794   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:10:31.967811   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:10:31.967816   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:10:31.978945   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:31.978956   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:32.013871   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:32.013880   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:32.019579   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:32.019591   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:32.058494   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:10:32.058507   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:10:32.078025   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:10:32.078036   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:10:32.093623   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:32.093641   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:32.119013   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:10:32.119025   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:32.130934   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:10:32.130949   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:10:32.144180   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:10:32.144192   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:10:32.156796   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:10:32.156809   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:10:32.172696   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:10:32.172707   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:10:32.187185   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:10:32.187205   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:10:34.706686   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:39.707122   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:39.707240   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:39.718813   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:10:39.718890   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:39.729474   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:10:39.729539   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:39.739888   12861 logs.go:276] 2 containers: [6d6ceecff203 98858ec38534]
	I0318 04:10:39.739954   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:39.750293   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:10:39.750360   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:39.760918   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:10:39.760988   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:39.771375   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:10:39.771445   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:39.781411   12861 logs.go:276] 0 containers: []
	W0318 04:10:39.781424   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:39.781482   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:39.791834   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:10:39.791850   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:10:39.791855   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:10:39.803390   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:10:39.803401   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:10:39.820467   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:39.820480   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:39.857383   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:39.857401   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:39.896025   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:10:39.896036   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:10:39.913451   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:10:39.913463   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:10:39.926047   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:10:39.926058   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:10:39.938019   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:10:39.938031   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:10:39.953007   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:39.953019   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:39.977677   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:10:39.977692   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:39.990559   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:39.990570   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:39.995334   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:10:39.995346   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:10:40.013251   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:10:40.013263   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:10:42.527351   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:47.528541   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:47.528722   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:47.541461   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:10:47.541545   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:47.557156   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:10:47.557223   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:47.567613   12861 logs.go:276] 2 containers: [6d6ceecff203 98858ec38534]
	I0318 04:10:47.567683   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:47.578972   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:10:47.579041   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:47.589590   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:10:47.589663   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:47.600107   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:10:47.600177   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:47.615946   12861 logs.go:276] 0 containers: []
	W0318 04:10:47.615958   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:47.616018   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:47.630103   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:10:47.630118   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:47.630123   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:47.635322   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:47.635331   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:47.673013   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:10:47.673024   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:10:47.688414   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:10:47.688427   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:10:47.700998   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:10:47.701010   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:10:47.716305   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:10:47.716323   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:10:47.728926   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:10:47.728935   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:47.740954   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:47.740963   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:47.777005   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:10:47.777017   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:10:47.796753   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:10:47.796770   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:10:47.809320   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:10:47.809334   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:10:47.828074   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:10:47.828086   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:10:47.846353   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:47.846366   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:50.375000   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:10:55.376520   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:10:55.376702   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:10:55.388134   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:10:55.388218   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:10:55.398985   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:10:55.399052   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:10:55.410427   12861 logs.go:276] 2 containers: [6d6ceecff203 98858ec38534]
	I0318 04:10:55.410495   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:10:55.420964   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:10:55.421026   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:10:55.433183   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:10:55.433257   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:10:55.446015   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:10:55.446087   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:10:55.456579   12861 logs.go:276] 0 containers: []
	W0318 04:10:55.456591   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:10:55.456652   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:10:55.470019   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:10:55.470034   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:10:55.470040   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:10:55.481478   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:10:55.481489   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:10:55.498699   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:10:55.498710   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:10:55.534919   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:10:55.534936   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:10:55.539995   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:10:55.540005   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:10:55.560249   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:10:55.560261   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:10:55.574042   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:10:55.574052   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:10:55.592501   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:10:55.592514   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:10:55.610280   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:10:55.610296   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:10:55.635087   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:10:55.635105   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:10:55.647959   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:10:55.647973   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:10:55.686487   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:10:55.686499   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:10:55.702446   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:10:55.702460   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:10:58.216993   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:03.218703   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:03.219007   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:03.236861   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:11:03.236954   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:03.250307   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:11:03.250381   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:03.261880   12861 logs.go:276] 2 containers: [6d6ceecff203 98858ec38534]
	I0318 04:11:03.261953   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:03.272199   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:11:03.272268   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:03.282691   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:11:03.282762   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:03.300788   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:11:03.300860   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:03.311582   12861 logs.go:276] 0 containers: []
	W0318 04:11:03.311600   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:03.311658   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:03.322460   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:11:03.322477   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:03.322482   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:03.359100   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:03.359112   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:03.396435   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:11:03.396446   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:11:03.412003   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:11:03.412018   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:11:03.427341   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:11:03.427355   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:11:03.440596   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:11:03.440609   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:11:03.453107   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:11:03.453122   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:11:03.466043   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:11:03.466054   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:11:03.479079   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:03.479092   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:03.504477   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:11:03.504491   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:03.520126   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:03.520139   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:03.525236   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:11:03.525244   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:11:03.541057   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:11:03.541069   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:11:06.061287   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:11.063242   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:11.063418   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:11.079638   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:11:11.079719   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:11.096462   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:11:11.096537   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:11.107857   12861 logs.go:276] 2 containers: [6d6ceecff203 98858ec38534]
	I0318 04:11:11.107932   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:11.121720   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:11:11.121799   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:11.132794   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:11:11.132862   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:11.148042   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:11:11.148113   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:11.158925   12861 logs.go:276] 0 containers: []
	W0318 04:11:11.158940   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:11.159006   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:11.169855   12861 logs.go:276] 1 containers: [14437573f440]
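Each cycle rediscovers the component containers one filter at a time with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, matching the k8s_ name prefix that Docker-managed Kubernetes containers carry, and a warning is logged whenever a filter (here "kindnet", which this cluster does not run) matches nothing. A sketch of that discovery step, run locally rather than over SSH; the component list and helper name are illustrative assumptions:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers, running or exited, whose name matches
    // the k8s_<component> prefix convention seen in the log above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println("discovery failed:", err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", c)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }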
	I0318 04:11:11.169875   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:11.169881   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:11.204023   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:11:11.204032   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:11:11.218124   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:11:11.218134   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:11:11.232416   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:11:11.232426   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:11:11.247855   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:11:11.247865   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:11:11.260312   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:11:11.260323   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:11:11.272686   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:11:11.272698   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:11.285897   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:11.285908   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:11.323684   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:11.323701   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:11.328441   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:11:11.328454   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:11:11.340952   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:11:11.340965   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:11:11.361050   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:11:11.361062   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:11:11.381060   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:11.381078   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
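Every source in a gather pass is capped at 400 lines: docker logs --tail 400 for component containers, journalctl -n 400 for the kubelet and Docker units, and dmesg piped through tail -n 400 for the kernel ring buffer, so each pass stays bounded no matter how many retries accumulate. Sketched below with a generic runner; the commands are taken from the log, the helper itself is an assumption:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailSource runs one bounded collection command and prints its output,
    // mirroring the 400-line cap used throughout the gather cycle above.
    func tailSource(name string, argv ...string) {
        out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
        if err != nil {
            fmt.Printf("==> %s: collection failed: %v\n", name, err)
            return
        }
        fmt.Printf("==> %s <==\n%s", name, out)
    }

    func main() {
        tailSource("kube-apiserver", "docker", "logs", "--tail", "400", "f2ee068d2845")
        tailSource("kubelet", "journalctl", "-u", "kubelet", "-n", "400")
        tailSource("dmesg", "bash", "-c",
            "dmesg --level warn,err,crit,alert,emerg | tail -n 400")
    }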
	I0318 04:11:13.908239   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:18.909726   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:18.909928   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:18.922590   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:11:18.922674   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:18.933351   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:11:18.933422   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:18.947466   12861 logs.go:276] 2 containers: [6d6ceecff203 98858ec38534]
	I0318 04:11:18.947538   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:18.957931   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:11:18.958001   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:18.968936   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:11:18.969004   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:18.979471   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:11:18.979550   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:18.989173   12861 logs.go:276] 0 containers: []
	W0318 04:11:18.989185   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:18.989240   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:18.999304   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:11:18.999317   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:11:18.999322   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:11:19.013163   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:11:19.013175   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:11:19.024739   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:19.024750   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:19.050065   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:11:19.050076   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:19.061975   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:19.061986   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:19.097215   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:19.097224   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:19.102175   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:11:19.102184   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:11:19.114840   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:11:19.114854   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:11:19.130888   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:11:19.130901   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:11:19.144011   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:11:19.144025   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:11:19.162627   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:11:19.162640   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:11:19.175695   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:19.175708   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
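The "describe nodes" pass does not rely on a host kubectl: it calls the version-matched binary minikube provisions inside the guest, /var/lib/minikube/binaries/v1.24.1/kubectl, with --kubeconfig pointed at the node-local /var/lib/minikube/kubeconfig. Roughly, with the paths as in the log and the wrapper being an assumption of this sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // describeNodes shells out to the guest-local kubectl with the node-local
    // kubeconfig, as the gather cycle above does, and returns combined output.
    func describeNodes() ([]byte, error) {
        return exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.24.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
    }

    func main() {
        out, err := describeNodes()
        if err != nil {
            fmt.Println("describe nodes failed:", err)
        }
        fmt.Print(string(out))
    }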
	I0318 04:11:19.229549   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:11:19.229562   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:11:21.774911   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:26.776919   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:26.777035   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:26.791460   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:11:26.791681   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:26.804143   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:11:26.804199   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:26.819911   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:11:26.819968   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:26.830479   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:11:26.830531   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:26.843329   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:11:26.843399   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:26.853481   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:11:26.853533   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:26.867192   12861 logs.go:276] 0 containers: []
	W0318 04:11:26.867203   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:26.867264   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:26.877560   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:11:26.877577   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:26.877581   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:26.912552   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:11:26.912560   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:11:26.929398   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:11:26.929413   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:11:26.941281   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:11:26.941291   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:11:26.952558   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:26.952572   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:26.957408   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:11:26.957416   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:11:26.973439   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:11:26.973449   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:11:26.993143   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:26.993159   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:27.031191   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:11:27.031202   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:11:27.047798   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:11:27.047818   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:11:27.064132   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:11:27.064144   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:11:27.076243   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:11:27.076253   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:11:27.088787   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:11:27.088799   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:27.101494   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:11:27.101505   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:11:27.117470   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:27.117485   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:29.646837   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:34.647085   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:34.647363   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:34.672896   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:11:34.673005   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:34.691128   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:11:34.691204   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:34.704975   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:11:34.705055   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:34.715886   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:11:34.715963   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:34.725934   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:11:34.726008   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:34.736461   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:11:34.736530   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:34.751008   12861 logs.go:276] 0 containers: []
	W0318 04:11:34.751018   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:34.751075   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:34.762883   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:11:34.762901   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:34.762906   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:34.798643   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:11:34.798663   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:11:34.821880   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:11:34.821896   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:11:34.836120   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:11:34.836131   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:34.849186   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:11:34.849199   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:11:34.865432   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:11:34.865444   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:11:34.878095   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:11:34.878107   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:11:34.890803   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:34.890814   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:34.916838   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:34.916866   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:34.922562   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:34.922574   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:34.962359   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:11:34.962368   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:11:34.980744   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:11:34.980755   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:11:34.993824   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:11:34.993835   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:11:35.012758   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:11:35.012768   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:11:35.027252   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:11:35.027269   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:11:37.546759   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:42.547532   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:42.547751   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:42.564395   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:11:42.564485   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:42.577751   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:11:42.577830   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:42.589007   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:11:42.589078   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:42.610408   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:11:42.610478   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:42.621359   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:11:42.621421   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:42.633242   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:11:42.633315   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:42.643559   12861 logs.go:276] 0 containers: []
	W0318 04:11:42.643568   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:42.643623   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:42.653803   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:11:42.653822   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:11:42.653827   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:11:42.665899   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:11:42.665912   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:11:42.681123   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:42.681133   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:42.716999   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:11:42.717011   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:11:42.732815   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:11:42.732826   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:11:42.745767   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:11:42.745780   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:11:42.758723   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:42.758733   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:42.763243   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:11:42.763253   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:11:42.777395   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:11:42.777410   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:11:42.793010   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:11:42.793019   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:11:42.812015   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:11:42.812029   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:42.826278   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:42.826291   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:42.863033   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:11:42.863052   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:11:42.876300   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:11:42.876311   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:11:42.889060   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:42.889072   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:45.417078   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:50.419255   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:50.419625   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:50.457525   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:11:50.457659   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:50.475332   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:11:50.475433   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:50.489050   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:11:50.489146   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:50.500946   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:11:50.501020   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:50.511365   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:11:50.511434   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:50.527955   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:11:50.528034   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:50.538512   12861 logs.go:276] 0 containers: []
	W0318 04:11:50.538526   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:50.538584   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:50.549338   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:11:50.549355   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:11:50.549361   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:11:50.574064   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:11:50.574075   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:11:50.585853   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:11:50.585867   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:11:50.598031   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:50.598044   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:50.626193   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:50.626208   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:50.663705   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:11:50.663728   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:11:50.678293   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:11:50.678304   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:50.691240   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:50.691252   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:50.695902   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:11:50.695913   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:11:50.708831   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:11:50.708843   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:11:50.731481   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:11:50.731494   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:11:50.748661   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:11:50.748674   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:11:50.764088   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:11:50.764100   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:11:50.784365   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:11:50.784381   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:11:50.799830   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:50.799842   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:53.349604   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:11:58.351667   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:11:58.351814   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:11:58.363545   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:11:58.363621   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:11:58.374159   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:11:58.374233   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:11:58.385001   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:11:58.385083   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:11:58.395769   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:11:58.395849   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:11:58.406941   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:11:58.407013   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:11:58.420644   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:11:58.420714   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:11:58.431335   12861 logs.go:276] 0 containers: []
	W0318 04:11:58.431347   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:11:58.431408   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:11:58.443348   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:11:58.443364   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:11:58.443369   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:11:58.455123   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:11:58.455132   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:11:58.473316   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:11:58.473328   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:11:58.486352   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:11:58.486365   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:11:58.499199   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:11:58.499211   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:11:58.515444   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:11:58.515458   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:11:58.552847   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:11:58.552860   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:11:58.568980   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:11:58.568993   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:11:58.581203   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:11:58.581215   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:11:58.594582   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:11:58.594592   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:11:58.609322   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:11:58.609329   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:11:58.635919   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:11:58.635929   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:11:58.652817   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:11:58.652829   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:11:58.690799   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:11:58.690810   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:11:58.695596   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:11:58.695607   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:12:01.213024   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:12:06.215179   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:12:06.215313   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:12:06.226646   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:12:06.226710   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:12:06.246319   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:12:06.246397   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:12:06.262669   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:12:06.262748   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:12:06.282823   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:12:06.282900   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:12:06.298846   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:12:06.298909   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:12:06.309420   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:12:06.309492   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:12:06.320617   12861 logs.go:276] 0 containers: []
	W0318 04:12:06.320633   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:12:06.320694   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:12:06.331904   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:12:06.331925   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:12:06.331931   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:12:06.358678   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:12:06.358693   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:12:06.398056   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:12:06.398070   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:12:06.417327   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:12:06.417345   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:12:06.430658   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:12:06.430669   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:12:06.450780   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:12:06.450790   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:12:06.463679   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:12:06.463690   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:12:06.483743   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:12:06.483755   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:12:06.497110   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:12:06.498410   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:12:06.516296   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:12:06.516309   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:12:06.529091   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:12:06.529101   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:12:06.548783   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:12:06.548797   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:12:06.564201   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:12:06.564212   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:12:06.580892   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:12:06.580902   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:12:06.616564   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:12:06.616580   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:12:09.123484   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:12:14.125537   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:12:14.125726   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:12:14.141593   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:12:14.141686   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:12:14.153678   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:12:14.153744   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:12:14.164642   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:12:14.164704   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:12:14.182749   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:12:14.182809   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:12:14.193958   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:12:14.194026   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:12:14.205902   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:12:14.205977   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:12:14.216966   12861 logs.go:276] 0 containers: []
	W0318 04:12:14.216977   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:12:14.217038   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:12:14.232136   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:12:14.232157   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:12:14.232163   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:12:14.247041   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:12:14.247050   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:12:14.263908   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:12:14.263918   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:12:14.301243   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:12:14.301253   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:12:14.320942   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:12:14.320951   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:12:14.333590   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:12:14.333600   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:12:14.350051   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:12:14.350063   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:12:14.363340   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:12:14.363351   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:12:14.401066   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:12:14.401079   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:12:14.424053   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:12:14.424066   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:12:14.450447   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:12:14.450461   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:12:14.463166   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:12:14.463181   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:12:14.475948   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:12:14.475960   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:12:14.489628   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:12:14.489643   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:12:14.502517   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:12:14.502530   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:12:17.008817   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:12:22.010183   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:12:22.010427   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:12:22.034137   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:12:22.034221   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:12:22.050802   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:12:22.050876   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:12:22.062940   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:12:22.063012   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:12:22.073582   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:12:22.073660   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:12:22.085336   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:12:22.085441   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:12:22.106977   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:12:22.107045   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:12:22.118781   12861 logs.go:276] 0 containers: []
	W0318 04:12:22.118792   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:12:22.118850   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:12:22.130557   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:12:22.130574   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:12:22.130580   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:12:22.147019   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:12:22.147028   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:12:22.159352   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:12:22.159362   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:12:22.196196   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:12:22.196206   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:12:22.201007   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:12:22.201019   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:12:22.213970   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:12:22.213982   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:12:22.251579   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:12:22.251589   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:12:22.279871   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:12:22.279886   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:12:22.295357   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:12:22.295367   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:12:22.312057   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:12:22.312069   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:12:22.331489   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:12:22.331502   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:12:22.344263   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:12:22.344274   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:12:22.357725   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:12:22.357734   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:12:22.400252   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:12:22.400265   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:12:22.416507   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:12:22.416520   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:12:24.931405   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:12:29.933397   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:12:29.933719   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:12:29.964603   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:12:29.964718   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:12:29.983760   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:12:29.983850   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:12:30.006004   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:12:30.006161   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:12:30.022759   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:12:30.022866   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:12:30.038479   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:12:30.038557   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:12:30.050642   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:12:30.050705   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:12:30.063051   12861 logs.go:276] 0 containers: []
	W0318 04:12:30.063061   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:12:30.063106   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:12:30.074558   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:12:30.074575   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:12:30.074580   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:12:30.091200   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:12:30.091211   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:12:30.110540   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:12:30.110551   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:12:30.146944   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:12:30.146956   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:12:30.172679   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:12:30.172691   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:12:30.184052   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:12:30.184066   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:12:30.198404   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:12:30.198418   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:12:30.210855   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:12:30.210868   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:12:30.224710   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:12:30.224725   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:12:30.240110   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:12:30.240121   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:12:30.245232   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:12:30.245241   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:12:30.258594   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:12:30.258603   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:12:30.270318   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:12:30.270333   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:12:30.282283   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:12:30.282297   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:12:30.298523   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:12:30.298533   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:12:32.834535   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:12:37.835760   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:12:37.835934   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:12:37.856105   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:12:37.856205   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:12:37.871153   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:12:37.871231   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:12:37.882833   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:12:37.882904   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:12:37.893324   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:12:37.893392   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:12:37.905009   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:12:37.905077   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:12:37.916240   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:12:37.916305   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:12:37.927054   12861 logs.go:276] 0 containers: []
	W0318 04:12:37.927068   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:12:37.927132   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:12:37.937914   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:12:37.937931   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:12:37.937937   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:12:37.949837   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:12:37.949853   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:12:37.965118   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:12:37.965128   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:12:37.976261   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:12:37.976272   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:12:38.007361   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:12:38.007376   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:12:38.021698   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:12:38.021709   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:12:38.039978   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:12:38.039989   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:12:38.051377   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:12:38.051387   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:12:38.076185   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:12:38.076194   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:12:38.111273   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:12:38.111284   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:12:38.146766   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:12:38.146777   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:12:38.159777   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:12:38.159788   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:12:38.171693   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:12:38.171704   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:12:38.176571   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:12:38.176581   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:12:38.188230   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:12:38.188241   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:12:40.702389   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:12:45.704536   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:12:45.704662   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:12:45.715973   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:12:45.716053   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:12:45.727429   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:12:45.727496   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:12:45.740361   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:12:45.740416   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:12:45.752876   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:12:45.752936   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:12:45.764293   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:12:45.764350   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:12:45.775112   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:12:45.775170   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:12:45.785200   12861 logs.go:276] 0 containers: []
	W0318 04:12:45.785209   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:12:45.785259   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:12:45.799330   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:12:45.799353   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:12:45.799359   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:12:45.817690   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:12:45.817710   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:12:45.841599   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:12:45.841628   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:12:45.863332   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:12:45.863348   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:12:45.891469   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:12:45.891496   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:12:45.930038   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:12:45.930056   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:12:45.934876   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:12:45.934883   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:12:45.949609   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:12:45.949620   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:12:45.961940   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:12:45.961952   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:12:45.974291   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:12:45.974305   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:12:45.988067   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:12:45.988079   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:12:46.002557   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:12:46.002569   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:12:46.017627   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:12:46.017638   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:12:46.030643   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:12:46.030652   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:12:46.048766   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:12:46.048775   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:12:48.586895   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:12:53.589031   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:12:53.589220   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:12:53.607314   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:12:53.607403   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:12:53.620779   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:12:53.620856   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:12:53.641644   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:12:53.641720   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:12:53.652116   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:12:53.652185   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:12:53.662577   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:12:53.662647   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:12:53.673428   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:12:53.673498   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:12:53.683533   12861 logs.go:276] 0 containers: []
	W0318 04:12:53.683545   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:12:53.683605   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:12:53.693931   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:12:53.693948   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:12:53.693955   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:12:53.729572   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:12:53.729582   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:12:53.740854   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:12:53.740871   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:12:53.756846   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:12:53.756861   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:12:53.781677   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:12:53.781686   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:12:53.792594   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:12:53.792609   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:12:53.827768   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:12:53.827779   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:12:53.842424   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:12:53.842440   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:12:53.854335   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:12:53.854345   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:12:53.866340   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:12:53.866350   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:12:53.871214   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:12:53.871223   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:12:53.882679   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:12:53.882690   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:12:53.896373   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:12:53.896383   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:12:53.907879   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:12:53.907890   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:12:53.925619   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:12:53.925631   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:12:56.447812   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:13:01.449059   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:13:01.449362   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:13:01.471881   12861 logs.go:276] 1 containers: [f2ee068d2845]
	I0318 04:13:01.471986   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:13:01.493511   12861 logs.go:276] 1 containers: [abedea882fa0]
	I0318 04:13:01.497734   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:13:01.509691   12861 logs.go:276] 4 containers: [6aa370cd9f4c bd5f69f4e55f 6d6ceecff203 98858ec38534]
	I0318 04:13:01.509764   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:13:01.521097   12861 logs.go:276] 1 containers: [43d353ff2f44]
	I0318 04:13:01.521170   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:13:01.531480   12861 logs.go:276] 1 containers: [82cabce52e1f]
	I0318 04:13:01.531543   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:13:01.541669   12861 logs.go:276] 1 containers: [0921a24d5f07]
	I0318 04:13:01.541731   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:13:01.551392   12861 logs.go:276] 0 containers: []
	W0318 04:13:01.551404   12861 logs.go:278] No container was found matching "kindnet"
	I0318 04:13:01.551466   12861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:13:01.561716   12861 logs.go:276] 1 containers: [14437573f440]
	I0318 04:13:01.561733   12861 logs.go:123] Gathering logs for coredns [6aa370cd9f4c] ...
	I0318 04:13:01.561738   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa370cd9f4c"
	I0318 04:13:01.577401   12861 logs.go:123] Gathering logs for coredns [6d6ceecff203] ...
	I0318 04:13:01.577413   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d6ceecff203"
	I0318 04:13:01.589043   12861 logs.go:123] Gathering logs for coredns [98858ec38534] ...
	I0318 04:13:01.589055   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98858ec38534"
	I0318 04:13:01.603653   12861 logs.go:123] Gathering logs for kube-scheduler [43d353ff2f44] ...
	I0318 04:13:01.603666   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43d353ff2f44"
	I0318 04:13:01.622741   12861 logs.go:123] Gathering logs for Docker ...
	I0318 04:13:01.622751   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:13:01.647594   12861 logs.go:123] Gathering logs for container status ...
	I0318 04:13:01.647604   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:13:01.659462   12861 logs.go:123] Gathering logs for kube-proxy [82cabce52e1f] ...
	I0318 04:13:01.659474   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cabce52e1f"
	I0318 04:13:01.671056   12861 logs.go:123] Gathering logs for storage-provisioner [14437573f440] ...
	I0318 04:13:01.671069   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14437573f440"
	I0318 04:13:01.682783   12861 logs.go:123] Gathering logs for kubelet ...
	I0318 04:13:01.682798   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:13:01.718514   12861 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:13:01.718526   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:13:01.755010   12861 logs.go:123] Gathering logs for kube-apiserver [f2ee068d2845] ...
	I0318 04:13:01.755023   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2ee068d2845"
	I0318 04:13:01.769515   12861 logs.go:123] Gathering logs for etcd [abedea882fa0] ...
	I0318 04:13:01.769526   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abedea882fa0"
	I0318 04:13:01.789904   12861 logs.go:123] Gathering logs for coredns [bd5f69f4e55f] ...
	I0318 04:13:01.789917   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd5f69f4e55f"
	I0318 04:13:01.801724   12861 logs.go:123] Gathering logs for dmesg ...
	I0318 04:13:01.801737   12861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:13:01.805791   12861 logs.go:123] Gathering logs for kube-controller-manager [0921a24d5f07] ...
	I0318 04:13:01.805798   12861 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0921a24d5f07"
	I0318 04:13:04.325323   12861 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:13:09.327799   12861 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:13:09.333234   12861 out.go:177] 
	W0318 04:13:09.341493   12861 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0318 04:13:09.341528   12861 out.go:239] * 
	W0318 04:13:09.342829   12861 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:13:09.354118   12861 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-484000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (636.84s)
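
The six-plus minutes of repetition above have a simple shape: minikube probes the guest apiserver's /healthz endpoint, each request gives up after about five seconds ("Client.Timeout exceeded while awaiting headers"), component logs are re-gathered, and the cycle repeats until the overall 6m0s node-wait budget is exhausted. A minimal Go sketch of that probe loop, with the endpoint and timeouts taken from the log above (illustrative only, not minikube's actual implementation):

	// healthzpoll.go: a hypothetical re-creation of the poll loop visible above.
	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		const endpoint = "https://10.0.2.15:8443/healthz" // address taken from the log
		client := &http.Client{
			// The guest apiserver presents a cert the host does not trust, so this
			// diagnostic probe skips verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(6 * time.Minute) // the "wait 6m0s for node" budget
		for time.Now().Before(deadline) {
			ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
			req, err := http.NewRequestWithContext(ctx, http.MethodGet, endpoint, nil)
			if err != nil {
				cancel()
				fmt.Println("bad request:", err)
				return
			}
			resp, err := client.Do(req)
			cancel()
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthz reported healthy")
					return
				}
			}
			time.Sleep(2 * time.Second) // pause between probes, as the timestamps above suggest
		}
		fmt.Println("apiserver healthz never reported healthy: deadline exceeded")
	}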

TestPause/serial/Start (10.14s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-907000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-907000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.075134875s)

-- stdout --
	* [pause-907000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-907000" primary control-plane node in "pause-907000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-907000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-907000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-907000 -n pause-907000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-907000 -n pause-907000: exit status 7 (59.72225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-907000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.14s)
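
Every qemu2 start in the remainder of this run fails the same way: socket_vmnet_client cannot connect to /var/run/socket_vmnet ("Connection refused"), which suggests the socket_vmnet daemon on this CI host was not running or not listening. A small diagnostic probe, hypothetical but using only the socket path named in the failures, can distinguish "socket file missing" from "file present but nothing listening":

	// socketprobe.go: a diagnostic sketch (not part of minikube) that checks
	// whether anything is accepting connections on the socket_vmnet unix socket.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failures above
		if _, err := os.Stat(sock); err != nil {
			fmt.Println("socket file missing, daemon likely never started:", err)
			os.Exit(1)
		}
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here reproduces the minikube failure: the file
			// exists but no daemon is accepting connections on it.
			fmt.Println("dial failed:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}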

TestNoKubernetes/serial/StartWithK8s (10.04s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-112000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-112000 --driver=qemu2 : exit status 80 (9.967556292s)

-- stdout --
	* [NoKubernetes-112000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-112000" primary control-plane node in "NoKubernetes-112000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-112000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-112000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-112000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-112000 -n NoKubernetes-112000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-112000 -n NoKubernetes-112000: exit status 7 (69.370375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-112000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.04s)

TestNoKubernetes/serial/StartWithStopK8s (5.92s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-112000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-112000 --no-kubernetes --driver=qemu2 : exit status 80 (5.844807334s)

-- stdout --
	* [NoKubernetes-112000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-112000
	* Restarting existing qemu2 VM for "NoKubernetes-112000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-112000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-112000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-112000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-112000 -n NoKubernetes-112000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-112000 -n NoKubernetes-112000: exit status 7 (70.660041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-112000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.92s)

TestNoKubernetes/serial/Start (5.87s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-112000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-112000 --no-kubernetes --driver=qemu2 : exit status 80 (5.836669875s)

-- stdout --
	* [NoKubernetes-112000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-112000
	* Restarting existing qemu2 VM for "NoKubernetes-112000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-112000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-112000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-112000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-112000 -n NoKubernetes-112000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-112000 -n NoKubernetes-112000: exit status 7 (31.997959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-112000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.87s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (4.19s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (4.19s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.54s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.54s)
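
Both hyperkit skip-upgrade tests exit with status 56, and the stray output interleaved into the next test's section (note its MINIKUBE_HOME points at a TestHyperkitDriverSkipUpgrade temp dir) suggests the cause: DRV_UNSUPPORTED_OS, since the hyperkit driver exists only for darwin/amd64 and this agent is darwin/arm64. An illustrative Go sketch of that kind of GOOS/GOARCH gate; the supported-platform table is hypothetical, not minikube's real driver registry:

	// drivergate.go: a sketch of the platform check behind a
	// DRV_UNSUPPORTED_OS exit. Table contents are assumptions.
	package main

	import (
		"fmt"
		"runtime"
	)

	var supported = map[string][]string{
		"hyperkit": {"darwin/amd64"}, // hyperkit has no arm64 build
		"qemu2":    {"darwin/amd64", "darwin/arm64", "linux/amd64", "linux/arm64"},
	}

	func driverSupported(driver string) bool {
		host := runtime.GOOS + "/" + runtime.GOARCH
		for _, p := range supported[driver] {
			if p == host {
				return true
			}
		}
		return false
	}

	func main() {
		if !driverSupported("hyperkit") {
			fmt.Printf("Exiting due to DRV_UNSUPPORTED_OS: the driver 'hyperkit' is not supported on %s/%s\n", runtime.GOOS, runtime.GOARCH)
		}
	}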

TestNoKubernetes/serial/StartNoArgs (5.97s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-112000 --driver=qemu2 
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18431
- KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1460840836/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-112000 --driver=qemu2 : exit status 80 (5.9015385s)

-- stdout --
	* [NoKubernetes-112000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-112000
	* Restarting existing qemu2 VM for "NoKubernetes-112000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-112000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-112000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-112000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-112000 -n NoKubernetes-112000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-112000 -n NoKubernetes-112000: exit status 7 (70.813209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-112000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.97s)
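
A pattern common to all of these host-start failures, and spelled out by the alsologtostderr trace in the next section, is that minikube treats the first StartHost error as retryable: it deletes or restarts the VM, waits five seconds ("Will try again in 5 seconds"), and only exits with GUEST_PROVISION when the second attempt also fails. A compact Go sketch of that try-once-more shape; startHost is a hypothetical stand-in whose error text mirrors the log:

	// startretry.go: illustrative single-retry pattern, not minikube's code.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func startHost() error {
		// Stand-in for the qemu2 driver start that fails throughout this run.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				return
			}
		}
		fmt.Println("host started")
	}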

TestNetworkPlugins/group/auto/Start (9.89s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-466000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-466000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.889785042s)

-- stdout --
	* [auto-466000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-466000" primary control-plane node in "auto-466000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-466000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:14:33.431491   13274 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:14:33.431622   13274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:14:33.431626   13274 out.go:304] Setting ErrFile to fd 2...
	I0318 04:14:33.431628   13274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:14:33.431755   13274 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:14:33.432824   13274 out.go:298] Setting JSON to false
	I0318 04:14:33.449218   13274 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8046,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:14:33.449279   13274 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:14:33.456076   13274 out.go:177] * [auto-466000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:14:33.469023   13274 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:14:33.464052   13274 notify.go:220] Checking for updates...
	I0318 04:14:33.475016   13274 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:14:33.478042   13274 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:14:33.479479   13274 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:14:33.482967   13274 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:14:33.486046   13274 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:14:33.489466   13274 config.go:182] Loaded profile config "cert-expiration-153000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:14:33.489534   13274 config.go:182] Loaded profile config "multinode-886000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:14:33.489588   13274 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:14:33.493990   13274 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:14:33.501054   13274 start.go:297] selected driver: qemu2
	I0318 04:14:33.501060   13274 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:14:33.501065   13274 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:14:33.503416   13274 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:14:33.507034   13274 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:14:33.510047   13274 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:14:33.510090   13274 cni.go:84] Creating CNI manager for ""
	I0318 04:14:33.510098   13274 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:14:33.510109   13274 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:14:33.510147   13274 start.go:340] cluster config:
	{Name:auto-466000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:14:33.514837   13274 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:14:33.522037   13274 out.go:177] * Starting "auto-466000" primary control-plane node in "auto-466000" cluster
	I0318 04:14:33.526054   13274 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:14:33.526072   13274 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:14:33.526081   13274 cache.go:56] Caching tarball of preloaded images
	I0318 04:14:33.526148   13274 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:14:33.526154   13274 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:14:33.526221   13274 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/auto-466000/config.json ...
	I0318 04:14:33.526233   13274 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/auto-466000/config.json: {Name:mkb6fffef9b2cabe973463cc19ad73147a979c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:14:33.526457   13274 start.go:360] acquireMachinesLock for auto-466000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:14:33.526490   13274 start.go:364] duration metric: took 27.125µs to acquireMachinesLock for "auto-466000"
	I0318 04:14:33.526504   13274 start.go:93] Provisioning new machine with config: &{Name:auto-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:14:33.526532   13274 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:14:33.534036   13274 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:14:33.551825   13274 start.go:159] libmachine.API.Create for "auto-466000" (driver="qemu2")
	I0318 04:14:33.551859   13274 client.go:168] LocalClient.Create starting
	I0318 04:14:33.551924   13274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:14:33.551953   13274 main.go:141] libmachine: Decoding PEM data...
	I0318 04:14:33.551962   13274 main.go:141] libmachine: Parsing certificate...
	I0318 04:14:33.552014   13274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:14:33.552037   13274 main.go:141] libmachine: Decoding PEM data...
	I0318 04:14:33.552045   13274 main.go:141] libmachine: Parsing certificate...
	I0318 04:14:33.552536   13274 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:14:33.693883   13274 main.go:141] libmachine: Creating SSH key...
	I0318 04:14:33.751939   13274 main.go:141] libmachine: Creating Disk image...
	I0318 04:14:33.751947   13274 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:14:33.752148   13274 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/auto-466000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/auto-466000/disk.qcow2
	I0318 04:14:33.764323   13274 main.go:141] libmachine: STDOUT: 
	I0318 04:14:33.764344   13274 main.go:141] libmachine: STDERR: 
	I0318 04:14:33.764404   13274 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/auto-466000/disk.qcow2 +20000M
	I0318 04:14:33.775368   13274 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:14:33.775385   13274 main.go:141] libmachine: STDERR: 
	I0318 04:14:33.775399   13274 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/auto-466000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/auto-466000/disk.qcow2
	I0318 04:14:33.775411   13274 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:14:33.775444   13274 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/auto-466000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/auto-466000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/auto-466000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:93:6b:ec:3b:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/auto-466000/disk.qcow2
	I0318 04:14:33.777182   13274 main.go:141] libmachine: STDOUT: 
	I0318 04:14:33.777197   13274 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:14:33.777226   13274 client.go:171] duration metric: took 225.368625ms to LocalClient.Create
	I0318 04:14:35.779405   13274 start.go:128] duration metric: took 2.252922042s to createHost
	I0318 04:14:35.779472   13274 start.go:83] releasing machines lock for "auto-466000", held for 2.253045459s
	W0318 04:14:35.779552   13274 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:14:35.793615   13274 out.go:177] * Deleting "auto-466000" in qemu2 ...
	W0318 04:14:35.821697   13274 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:14:35.821727   13274 start.go:728] Will try again in 5 seconds ...
	I0318 04:14:40.823771   13274 start.go:360] acquireMachinesLock for auto-466000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:14:40.824277   13274 start.go:364] duration metric: took 379.083µs to acquireMachinesLock for "auto-466000"
	I0318 04:14:40.824380   13274 start.go:93] Provisioning new machine with config: &{Name:auto-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:14:40.824609   13274 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:14:40.835210   13274 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:14:40.884577   13274 start.go:159] libmachine.API.Create for "auto-466000" (driver="qemu2")
	I0318 04:14:40.884624   13274 client.go:168] LocalClient.Create starting
	I0318 04:14:40.884727   13274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:14:40.884792   13274 main.go:141] libmachine: Decoding PEM data...
	I0318 04:14:40.884813   13274 main.go:141] libmachine: Parsing certificate...
	I0318 04:14:40.884873   13274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:14:40.884922   13274 main.go:141] libmachine: Decoding PEM data...
	I0318 04:14:40.884936   13274 main.go:141] libmachine: Parsing certificate...
	I0318 04:14:40.885434   13274 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:14:41.035184   13274 main.go:141] libmachine: Creating SSH key...
	I0318 04:14:41.214874   13274 main.go:141] libmachine: Creating Disk image...
	I0318 04:14:41.214884   13274 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:14:41.215093   13274 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/auto-466000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/auto-466000/disk.qcow2
	I0318 04:14:41.227787   13274 main.go:141] libmachine: STDOUT: 
	I0318 04:14:41.227810   13274 main.go:141] libmachine: STDERR: 
	I0318 04:14:41.227875   13274 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/auto-466000/disk.qcow2 +20000M
	I0318 04:14:41.238484   13274 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:14:41.238511   13274 main.go:141] libmachine: STDERR: 
	I0318 04:14:41.238526   13274 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/auto-466000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/auto-466000/disk.qcow2
	I0318 04:14:41.238532   13274 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:14:41.238568   13274 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/auto-466000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/auto-466000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/auto-466000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:9b:cf:b3:a3:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/auto-466000/disk.qcow2
	I0318 04:14:41.240320   13274 main.go:141] libmachine: STDOUT: 
	I0318 04:14:41.240334   13274 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:14:41.240348   13274 client.go:171] duration metric: took 355.729209ms to LocalClient.Create
	I0318 04:14:43.242457   13274 start.go:128] duration metric: took 2.417898875s to createHost
	I0318 04:14:43.242517   13274 start.go:83] releasing machines lock for "auto-466000", held for 2.418294084s
	W0318 04:14:43.242906   13274 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:14:43.258723   13274 out.go:177] 
	W0318 04:14:43.261669   13274 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:14:43.261698   13274 out.go:239] * 
	* 
	W0318 04:14:43.264611   13274 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:14:43.275611   13274 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.89s)
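
Every start attempt in this group fails at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched and minikube exits with GUEST_PROVISION (exit status 80). The sketch below is a hypothetical standalone probe in Go (the language of the logged main.go/client.go code) that dials the same socket; the file name and the 2-second timeout are illustrative choices, not anything taken from the test suite.

	// socketprobe.go - hypothetical diagnostic, not part of minikube: dial
	// the unix socket that socket_vmnet_client needs, and report the same
	// "Connection refused" condition seen throughout this report.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing commands above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here means no daemon is accepting on the
			// socket; a "no such file" error means the socket file is missing.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails the way the tests do, restarting (or reinstalling) the socket_vmnet service on the build agent is the usual remedy; every qemu2-driver test on this host will keep failing until the dial succeeds.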

TestNetworkPlugins/group/kindnet/Start (9.84s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-466000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-466000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.837141292s)

-- stdout --
	* [kindnet-466000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-466000" primary control-plane node in "kindnet-466000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-466000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:14:45.604631   13388 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:14:45.604751   13388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:14:45.604754   13388 out.go:304] Setting ErrFile to fd 2...
	I0318 04:14:45.604757   13388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:14:45.604888   13388 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:14:45.606006   13388 out.go:298] Setting JSON to false
	I0318 04:14:45.622449   13388 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8058,"bootTime":1710752427,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:14:45.622512   13388 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:14:45.628265   13388 out.go:177] * [kindnet-466000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:14:45.636209   13388 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:14:45.636248   13388 notify.go:220] Checking for updates...
	I0318 04:14:45.643163   13388 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:14:45.646215   13388 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:14:45.649169   13388 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:14:45.652100   13388 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:14:45.655148   13388 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:14:45.658535   13388 config.go:182] Loaded profile config "cert-expiration-153000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:14:45.658602   13388 config.go:182] Loaded profile config "multinode-886000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:14:45.658666   13388 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:14:45.662132   13388 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:14:45.669181   13388 start.go:297] selected driver: qemu2
	I0318 04:14:45.669187   13388 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:14:45.669193   13388 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:14:45.671460   13388 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:14:45.672950   13388 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:14:45.676247   13388 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:14:45.676286   13388 cni.go:84] Creating CNI manager for "kindnet"
	I0318 04:14:45.676290   13388 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 04:14:45.676332   13388 start.go:340] cluster config:
	{Name:kindnet-466000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:14:45.680700   13388 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:14:45.688167   13388 out.go:177] * Starting "kindnet-466000" primary control-plane node in "kindnet-466000" cluster
	I0318 04:14:45.692164   13388 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:14:45.692181   13388 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:14:45.692192   13388 cache.go:56] Caching tarball of preloaded images
	I0318 04:14:45.692251   13388 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:14:45.692260   13388 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:14:45.692325   13388 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/kindnet-466000/config.json ...
	I0318 04:14:45.692337   13388 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/kindnet-466000/config.json: {Name:mk37f09288bfe43023e16645fca55bc25d04b212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:14:45.692570   13388 start.go:360] acquireMachinesLock for kindnet-466000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:14:45.692604   13388 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "kindnet-466000"
	I0318 04:14:45.692622   13388 start.go:93] Provisioning new machine with config: &{Name:kindnet-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:14:45.692660   13388 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:14:45.700162   13388 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:14:45.718258   13388 start.go:159] libmachine.API.Create for "kindnet-466000" (driver="qemu2")
	I0318 04:14:45.718291   13388 client.go:168] LocalClient.Create starting
	I0318 04:14:45.718348   13388 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:14:45.718377   13388 main.go:141] libmachine: Decoding PEM data...
	I0318 04:14:45.718387   13388 main.go:141] libmachine: Parsing certificate...
	I0318 04:14:45.718436   13388 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:14:45.718459   13388 main.go:141] libmachine: Decoding PEM data...
	I0318 04:14:45.718470   13388 main.go:141] libmachine: Parsing certificate...
	I0318 04:14:45.718907   13388 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:14:45.860594   13388 main.go:141] libmachine: Creating SSH key...
	I0318 04:14:45.969661   13388 main.go:141] libmachine: Creating Disk image...
	I0318 04:14:45.969668   13388 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:14:45.969843   13388 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kindnet-466000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kindnet-466000/disk.qcow2
	I0318 04:14:45.982334   13388 main.go:141] libmachine: STDOUT: 
	I0318 04:14:45.982351   13388 main.go:141] libmachine: STDERR: 
	I0318 04:14:45.982404   13388 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kindnet-466000/disk.qcow2 +20000M
	I0318 04:14:45.993077   13388 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:14:45.993103   13388 main.go:141] libmachine: STDERR: 
	I0318 04:14:45.993127   13388 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kindnet-466000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kindnet-466000/disk.qcow2
	I0318 04:14:45.993132   13388 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:14:45.993162   13388 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kindnet-466000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kindnet-466000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kindnet-466000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:00:74:8e:18:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kindnet-466000/disk.qcow2
	I0318 04:14:45.994941   13388 main.go:141] libmachine: STDOUT: 
	I0318 04:14:45.994955   13388 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:14:45.994982   13388 client.go:171] duration metric: took 276.6945ms to LocalClient.Create
	I0318 04:14:47.995279   13388 start.go:128] duration metric: took 2.302671667s to createHost
	I0318 04:14:47.995357   13388 start.go:83] releasing machines lock for "kindnet-466000", held for 2.302820916s
	W0318 04:14:47.995429   13388 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:14:48.011560   13388 out.go:177] * Deleting "kindnet-466000" in qemu2 ...
	W0318 04:14:48.037853   13388 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:14:48.037908   13388 start.go:728] Will try again in 5 seconds ...
	I0318 04:14:53.040076   13388 start.go:360] acquireMachinesLock for kindnet-466000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:14:53.040466   13388 start.go:364] duration metric: took 296.333µs to acquireMachinesLock for "kindnet-466000"
	I0318 04:14:53.040593   13388 start.go:93] Provisioning new machine with config: &{Name:kindnet-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:14:53.040883   13388 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:14:53.050279   13388 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:14:53.099843   13388 start.go:159] libmachine.API.Create for "kindnet-466000" (driver="qemu2")
	I0318 04:14:53.099900   13388 client.go:168] LocalClient.Create starting
	I0318 04:14:53.099997   13388 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:14:53.100061   13388 main.go:141] libmachine: Decoding PEM data...
	I0318 04:14:53.100075   13388 main.go:141] libmachine: Parsing certificate...
	I0318 04:14:53.100144   13388 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:14:53.100186   13388 main.go:141] libmachine: Decoding PEM data...
	I0318 04:14:53.100196   13388 main.go:141] libmachine: Parsing certificate...
	I0318 04:14:53.100720   13388 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:14:53.279012   13388 main.go:141] libmachine: Creating SSH key...
	I0318 04:14:53.340336   13388 main.go:141] libmachine: Creating Disk image...
	I0318 04:14:53.340341   13388 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:14:53.340539   13388 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kindnet-466000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kindnet-466000/disk.qcow2
	I0318 04:14:53.352808   13388 main.go:141] libmachine: STDOUT: 
	I0318 04:14:53.352832   13388 main.go:141] libmachine: STDERR: 
	I0318 04:14:53.352881   13388 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kindnet-466000/disk.qcow2 +20000M
	I0318 04:14:53.363482   13388 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:14:53.363500   13388 main.go:141] libmachine: STDERR: 
	I0318 04:14:53.363517   13388 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kindnet-466000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kindnet-466000/disk.qcow2
	I0318 04:14:53.363522   13388 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:14:53.363559   13388 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kindnet-466000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kindnet-466000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kindnet-466000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:3b:8d:63:65:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kindnet-466000/disk.qcow2
	I0318 04:14:53.365291   13388 main.go:141] libmachine: STDOUT: 
	I0318 04:14:53.365306   13388 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:14:53.365318   13388 client.go:171] duration metric: took 265.422791ms to LocalClient.Create
	I0318 04:14:55.367475   13388 start.go:128] duration metric: took 2.326637042s to createHost
	I0318 04:14:55.367750   13388 start.go:83] releasing machines lock for "kindnet-466000", held for 2.327139083s
	W0318 04:14:55.368071   13388 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:14:55.377702   13388 out.go:177] 
	W0318 04:14:55.384792   13388 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:14:55.384818   13388 out.go:239] * 
	* 
	W0318 04:14:55.387499   13388 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:14:55.396685   13388 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.84s)
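
The stderr above also shows the retry shape common to these failures: one failed createHost, deletion of the half-created profile, a fixed five-second wait ("Will try again in 5 seconds ..."), one more attempt, then the hard GUEST_PROVISION exit. The Go sketch below is an illustrative reconstruction of that control flow under those assumptions, not minikube's actual start.go code; all names in it are hypothetical.

	// retryflow.go - hypothetical reconstruction of the start flow logged
	// above: try, clean up, wait 5s, try once more, then fail hard.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// errRefused mirrors the error every attempt in this report hits.
	var errRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

	// createHost stands in for the qemu2 driver's host creation, which in
	// this report always fails when socket_vmnet_client dials the socket.
	func createHost(profile string) error { return errRefused }

	func startWithRetry(profile string) error {
		err := createHost(profile)
		if err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		fmt.Printf("* Deleting %q in qemu2 ...\n", profile)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := createHost(profile); err != nil {
			// corresponds to "X Exiting due to GUEST_PROVISION" / exit status 80
			return fmt.Errorf("GUEST_PROVISION: %w", err)
		}
		return nil
	}

	func main() {
		if err := startWithRetry("kindnet-466000"); err != nil {
			fmt.Println(err)
		}
	}

Because the underlying socket never becomes reachable, the second attempt is guaranteed to fail too, which is why each of these tests burns roughly ten seconds before exiting.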

TestNetworkPlugins/group/calico/Start (9.79s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-466000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-466000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.7914685s)

-- stdout --
	* [calico-466000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-466000" primary control-plane node in "calico-466000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-466000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:14:57.792938   13509 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:14:57.793065   13509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:14:57.793069   13509 out.go:304] Setting ErrFile to fd 2...
	I0318 04:14:57.793071   13509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:14:57.793186   13509 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:14:57.794243   13509 out.go:298] Setting JSON to false
	I0318 04:14:57.810316   13509 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8070,"bootTime":1710752427,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:14:57.810383   13509 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:14:57.816178   13509 out.go:177] * [calico-466000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:14:57.824191   13509 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:14:57.827173   13509 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:14:57.824225   13509 notify.go:220] Checking for updates...
	I0318 04:14:57.833048   13509 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:14:57.837145   13509 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:14:57.840142   13509 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:14:57.843074   13509 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:14:57.846543   13509 config.go:182] Loaded profile config "cert-expiration-153000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:14:57.846610   13509 config.go:182] Loaded profile config "multinode-886000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:14:57.846662   13509 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:14:57.851178   13509 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:14:57.862128   13509 start.go:297] selected driver: qemu2
	I0318 04:14:57.862134   13509 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:14:57.862139   13509 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:14:57.864528   13509 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:14:57.868125   13509 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:14:57.871212   13509 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:14:57.871270   13509 cni.go:84] Creating CNI manager for "calico"
	I0318 04:14:57.871278   13509 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0318 04:14:57.871323   13509 start.go:340] cluster config:
	{Name:calico-466000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:14:57.875985   13509 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:14:57.883019   13509 out.go:177] * Starting "calico-466000" primary control-plane node in "calico-466000" cluster
	I0318 04:14:57.887159   13509 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:14:57.887176   13509 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:14:57.887185   13509 cache.go:56] Caching tarball of preloaded images
	I0318 04:14:57.887260   13509 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:14:57.887274   13509 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:14:57.887336   13509 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/calico-466000/config.json ...
	I0318 04:14:57.887348   13509 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/calico-466000/config.json: {Name:mk238668621e142b3f1b7287d8244a4d5b57b271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:14:57.887569   13509 start.go:360] acquireMachinesLock for calico-466000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:14:57.887602   13509 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "calico-466000"
	I0318 04:14:57.887616   13509 start.go:93] Provisioning new machine with config: &{Name:calico-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:14:57.887645   13509 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:14:57.895113   13509 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:14:57.912875   13509 start.go:159] libmachine.API.Create for "calico-466000" (driver="qemu2")
	I0318 04:14:57.912910   13509 client.go:168] LocalClient.Create starting
	I0318 04:14:57.912974   13509 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:14:57.913003   13509 main.go:141] libmachine: Decoding PEM data...
	I0318 04:14:57.913017   13509 main.go:141] libmachine: Parsing certificate...
	I0318 04:14:57.913068   13509 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:14:57.913093   13509 main.go:141] libmachine: Decoding PEM data...
	I0318 04:14:57.913099   13509 main.go:141] libmachine: Parsing certificate...
	I0318 04:14:57.913430   13509 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:14:58.054638   13509 main.go:141] libmachine: Creating SSH key...
	I0318 04:14:58.143157   13509 main.go:141] libmachine: Creating Disk image...
	I0318 04:14:58.143167   13509 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:14:58.143358   13509 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/calico-466000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/calico-466000/disk.qcow2
	I0318 04:14:58.155680   13509 main.go:141] libmachine: STDOUT: 
	I0318 04:14:58.155701   13509 main.go:141] libmachine: STDERR: 
	I0318 04:14:58.155752   13509 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/calico-466000/disk.qcow2 +20000M
	I0318 04:14:58.166575   13509 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:14:58.166592   13509 main.go:141] libmachine: STDERR: 
	I0318 04:14:58.166603   13509 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/calico-466000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/calico-466000/disk.qcow2
	I0318 04:14:58.166607   13509 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:14:58.166644   13509 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/calico-466000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/calico-466000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/calico-466000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:a7:d9:a1:cf:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/calico-466000/disk.qcow2
	I0318 04:14:58.168447   13509 main.go:141] libmachine: STDOUT: 
	I0318 04:14:58.168463   13509 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:14:58.168480   13509 client.go:171] duration metric: took 255.573041ms to LocalClient.Create
	I0318 04:15:00.169334   13509 start.go:128] duration metric: took 2.281718208s to createHost
	I0318 04:15:00.169443   13509 start.go:83] releasing machines lock for "calico-466000", held for 2.2819065s
	W0318 04:15:00.169518   13509 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:15:00.180827   13509 out.go:177] * Deleting "calico-466000" in qemu2 ...
	W0318 04:15:00.211300   13509 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:15:00.211333   13509 start.go:728] Will try again in 5 seconds ...
	I0318 04:15:05.213332   13509 start.go:360] acquireMachinesLock for calico-466000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:15:05.213740   13509 start.go:364] duration metric: took 300.958µs to acquireMachinesLock for "calico-466000"
	I0318 04:15:05.213883   13509 start.go:93] Provisioning new machine with config: &{Name:calico-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:15:05.214193   13509 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:15:05.223821   13509 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:15:05.272563   13509 start.go:159] libmachine.API.Create for "calico-466000" (driver="qemu2")
	I0318 04:15:05.272626   13509 client.go:168] LocalClient.Create starting
	I0318 04:15:05.272752   13509 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:15:05.272833   13509 main.go:141] libmachine: Decoding PEM data...
	I0318 04:15:05.272850   13509 main.go:141] libmachine: Parsing certificate...
	I0318 04:15:05.272954   13509 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:15:05.273010   13509 main.go:141] libmachine: Decoding PEM data...
	I0318 04:15:05.273022   13509 main.go:141] libmachine: Parsing certificate...
	I0318 04:15:05.273683   13509 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:15:05.425809   13509 main.go:141] libmachine: Creating SSH key...
	I0318 04:15:05.478772   13509 main.go:141] libmachine: Creating Disk image...
	I0318 04:15:05.478777   13509 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:15:05.478971   13509 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/calico-466000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/calico-466000/disk.qcow2
	I0318 04:15:05.491362   13509 main.go:141] libmachine: STDOUT: 
	I0318 04:15:05.491381   13509 main.go:141] libmachine: STDERR: 
	I0318 04:15:05.491435   13509 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/calico-466000/disk.qcow2 +20000M
	I0318 04:15:05.502002   13509 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:15:05.502022   13509 main.go:141] libmachine: STDERR: 
	I0318 04:15:05.502033   13509 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/calico-466000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/calico-466000/disk.qcow2
	I0318 04:15:05.502038   13509 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:15:05.502070   13509 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/calico-466000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/calico-466000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/calico-466000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:b4:86:c4:d6:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/calico-466000/disk.qcow2
	I0318 04:15:05.503753   13509 main.go:141] libmachine: STDOUT: 
	I0318 04:15:05.503770   13509 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:15:05.503782   13509 client.go:171] duration metric: took 231.158833ms to LocalClient.Create
	I0318 04:15:07.505898   13509 start.go:128] duration metric: took 2.291749667s to createHost
	I0318 04:15:07.505978   13509 start.go:83] releasing machines lock for "calico-466000", held for 2.292290791s
	W0318 04:15:07.506345   13509 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:15:07.522143   13509 out.go:177] 
	W0318 04:15:07.526217   13509 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:15:07.526249   13509 out.go:239] * 
	* 
	W0318 04:15:07.528057   13509 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:15:07.540074   13509 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.79s)
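Every start in this group fails at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and the VM is never booted. A first triage on the CI host might look like the sketch below; the --vmnet-gateway address and the Homebrew service name are assumptions that depend on how socket_vmnet was installed, not something this log verifies.

	# Is the socket present, and is a daemon actually serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If nothing is listening, start the daemon by hand (vmnet needs root) ...
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

	# ... or restart the service if it was installed via Homebrew
	sudo brew services restart socket_vmnet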

TestNetworkPlugins/group/custom-flannel/Start (9.79s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-466000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-466000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.788046459s)

-- stdout --
	* [custom-flannel-466000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-466000" primary control-plane node in "custom-flannel-466000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-466000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:15:10.031314   13628 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:15:10.031445   13628 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:15:10.031449   13628 out.go:304] Setting ErrFile to fd 2...
	I0318 04:15:10.031451   13628 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:15:10.031569   13628 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:15:10.032630   13628 out.go:298] Setting JSON to false
	I0318 04:15:10.048591   13628 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8083,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:15:10.048660   13628 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:15:10.055273   13628 out.go:177] * [custom-flannel-466000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:15:10.063202   13628 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:15:10.066252   13628 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:15:10.063242   13628 notify.go:220] Checking for updates...
	I0318 04:15:10.072162   13628 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:15:10.075158   13628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:15:10.078113   13628 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:15:10.081140   13628 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:15:10.084499   13628 config.go:182] Loaded profile config "cert-expiration-153000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:15:10.084571   13628 config.go:182] Loaded profile config "multinode-886000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:15:10.084624   13628 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:15:10.089116   13628 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:15:10.096150   13628 start.go:297] selected driver: qemu2
	I0318 04:15:10.096156   13628 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:15:10.096161   13628 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:15:10.098381   13628 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:15:10.101107   13628 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:15:10.104228   13628 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:15:10.104263   13628 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0318 04:15:10.104271   13628 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0318 04:15:10.104306   13628 start.go:340] cluster config:
	{Name:custom-flannel-466000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:15:10.108811   13628 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:15:10.116132   13628 out.go:177] * Starting "custom-flannel-466000" primary control-plane node in "custom-flannel-466000" cluster
	I0318 04:15:10.120162   13628 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:15:10.120177   13628 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:15:10.120185   13628 cache.go:56] Caching tarball of preloaded images
	I0318 04:15:10.120255   13628 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:15:10.120261   13628 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:15:10.120343   13628 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/custom-flannel-466000/config.json ...
	I0318 04:15:10.120355   13628 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/custom-flannel-466000/config.json: {Name:mk74871235a3dbdc959d88dcb9f87f984b98215f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:15:10.120584   13628 start.go:360] acquireMachinesLock for custom-flannel-466000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:15:10.120620   13628 start.go:364] duration metric: took 28.417µs to acquireMachinesLock for "custom-flannel-466000"
	I0318 04:15:10.120635   13628 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:15:10.120666   13628 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:15:10.125140   13628 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:15:10.143565   13628 start.go:159] libmachine.API.Create for "custom-flannel-466000" (driver="qemu2")
	I0318 04:15:10.143592   13628 client.go:168] LocalClient.Create starting
	I0318 04:15:10.143649   13628 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:15:10.143683   13628 main.go:141] libmachine: Decoding PEM data...
	I0318 04:15:10.143693   13628 main.go:141] libmachine: Parsing certificate...
	I0318 04:15:10.143739   13628 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:15:10.143761   13628 main.go:141] libmachine: Decoding PEM data...
	I0318 04:15:10.143768   13628 main.go:141] libmachine: Parsing certificate...
	I0318 04:15:10.144200   13628 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:15:10.284487   13628 main.go:141] libmachine: Creating SSH key...
	I0318 04:15:10.388754   13628 main.go:141] libmachine: Creating Disk image...
	I0318 04:15:10.388761   13628 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:15:10.388954   13628 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/custom-flannel-466000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/custom-flannel-466000/disk.qcow2
	I0318 04:15:10.400987   13628 main.go:141] libmachine: STDOUT: 
	I0318 04:15:10.401007   13628 main.go:141] libmachine: STDERR: 
	I0318 04:15:10.401066   13628 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/custom-flannel-466000/disk.qcow2 +20000M
	I0318 04:15:10.411848   13628 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:15:10.411862   13628 main.go:141] libmachine: STDERR: 
	I0318 04:15:10.411874   13628 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/custom-flannel-466000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/custom-flannel-466000/disk.qcow2
	I0318 04:15:10.411878   13628 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:15:10.411907   13628 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/custom-flannel-466000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/custom-flannel-466000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/custom-flannel-466000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:4d:41:d5:e1:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/custom-flannel-466000/disk.qcow2
	I0318 04:15:10.413706   13628 main.go:141] libmachine: STDOUT: 
	I0318 04:15:10.413733   13628 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:15:10.413754   13628 client.go:171] duration metric: took 270.16525ms to LocalClient.Create
	I0318 04:15:12.415870   13628 start.go:128] duration metric: took 2.295254791s to createHost
	I0318 04:15:12.415918   13628 start.go:83] releasing machines lock for "custom-flannel-466000", held for 2.295365792s
	W0318 04:15:12.415993   13628 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:15:12.430895   13628 out.go:177] * Deleting "custom-flannel-466000" in qemu2 ...
	W0318 04:15:12.456454   13628 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:15:12.456484   13628 start.go:728] Will try again in 5 seconds ...
	I0318 04:15:17.457260   13628 start.go:360] acquireMachinesLock for custom-flannel-466000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:15:17.457705   13628 start.go:364] duration metric: took 361.833µs to acquireMachinesLock for "custom-flannel-466000"
	I0318 04:15:17.457848   13628 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:15:17.458143   13628 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:15:17.468740   13628 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:15:17.517204   13628 start.go:159] libmachine.API.Create for "custom-flannel-466000" (driver="qemu2")
	I0318 04:15:17.517253   13628 client.go:168] LocalClient.Create starting
	I0318 04:15:17.517346   13628 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:15:17.517400   13628 main.go:141] libmachine: Decoding PEM data...
	I0318 04:15:17.517419   13628 main.go:141] libmachine: Parsing certificate...
	I0318 04:15:17.517489   13628 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:15:17.517540   13628 main.go:141] libmachine: Decoding PEM data...
	I0318 04:15:17.517551   13628 main.go:141] libmachine: Parsing certificate...
	I0318 04:15:17.518234   13628 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:15:17.669843   13628 main.go:141] libmachine: Creating SSH key...
	I0318 04:15:17.722522   13628 main.go:141] libmachine: Creating Disk image...
	I0318 04:15:17.722528   13628 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:15:17.722712   13628 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/custom-flannel-466000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/custom-flannel-466000/disk.qcow2
	I0318 04:15:17.735166   13628 main.go:141] libmachine: STDOUT: 
	I0318 04:15:17.735226   13628 main.go:141] libmachine: STDERR: 
	I0318 04:15:17.735275   13628 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/custom-flannel-466000/disk.qcow2 +20000M
	I0318 04:15:17.745744   13628 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:15:17.745798   13628 main.go:141] libmachine: STDERR: 
	I0318 04:15:17.745817   13628 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/custom-flannel-466000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/custom-flannel-466000/disk.qcow2
	I0318 04:15:17.745826   13628 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:15:17.745854   13628 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/custom-flannel-466000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/custom-flannel-466000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/custom-flannel-466000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:f4:c8:1f:6b:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/custom-flannel-466000/disk.qcow2
	I0318 04:15:17.747519   13628 main.go:141] libmachine: STDOUT: 
	I0318 04:15:17.747559   13628 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:15:17.747573   13628 client.go:171] duration metric: took 230.322083ms to LocalClient.Create
	I0318 04:15:19.749681   13628 start.go:128] duration metric: took 2.291584875s to createHost
	I0318 04:15:19.749730   13628 start.go:83] releasing machines lock for "custom-flannel-466000", held for 2.29206025s
	W0318 04:15:19.750081   13628 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:15:19.759695   13628 out.go:177] 
	W0318 04:15:19.765695   13628 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:15:19.765742   13628 out.go:239] * 
	* 
	W0318 04:15:19.768728   13628 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:15:19.778654   13628 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.79s)
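The stderr above also shows minikube's retry path: after the first StartHost failure it deletes the half-created machine, waits five seconds, and provisions a second one, which fails on the identical connect error. Two identical failures in a row point at the host environment rather than a flaky test. The run can be reproduced outside the harness with the same invocation; the commands below are copied from the log, and the delete follows the advice printed in the error itself.

	out/minikube-darwin-arm64 start -p custom-flannel-466000 --memory=3072 \
	    --alsologtostderr --wait=true --wait-timeout=15m \
	    --cni=testdata/kube-flannel.yaml --driver=qemu2

	# Remove the leftover profile from the failed run
	out/minikube-darwin-arm64 delete -p custom-flannel-466000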

TestNetworkPlugins/group/false/Start (9.79s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-466000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-466000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.784534625s)

-- stdout --
	* [false-466000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-466000" primary control-plane node in "false-466000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-466000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:15:22.255746   13746 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:15:22.255883   13746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:15:22.255886   13746 out.go:304] Setting ErrFile to fd 2...
	I0318 04:15:22.255888   13746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:15:22.256004   13746 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:15:22.257035   13746 out.go:298] Setting JSON to false
	I0318 04:15:22.273513   13746 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8095,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:15:22.273585   13746 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:15:22.280081   13746 out.go:177] * [false-466000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:15:22.288016   13746 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:15:22.288059   13746 notify.go:220] Checking for updates...
	I0318 04:15:22.291082   13746 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:15:22.294992   13746 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:15:22.297947   13746 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:15:22.301016   13746 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:15:22.303892   13746 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:15:22.307364   13746 config.go:182] Loaded profile config "cert-expiration-153000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:15:22.307458   13746 config.go:182] Loaded profile config "multinode-886000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:15:22.307505   13746 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:15:22.312038   13746 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:15:22.318949   13746 start.go:297] selected driver: qemu2
	I0318 04:15:22.318960   13746 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:15:22.318965   13746 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:15:22.321189   13746 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:15:22.323964   13746 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:15:22.327035   13746 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:15:22.327077   13746 cni.go:84] Creating CNI manager for "false"
	I0318 04:15:22.327111   13746 start.go:340] cluster config:
	{Name:false-466000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:15:22.331564   13746 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:15:22.339831   13746 out.go:177] * Starting "false-466000" primary control-plane node in "false-466000" cluster
	I0318 04:15:22.344013   13746 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:15:22.344029   13746 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:15:22.344043   13746 cache.go:56] Caching tarball of preloaded images
	I0318 04:15:22.344101   13746 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:15:22.344107   13746 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:15:22.344175   13746 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/false-466000/config.json ...
	I0318 04:15:22.344186   13746 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/false-466000/config.json: {Name:mkf84ffdf0c7b837e8f7ecf7aafd58bfbb55d81d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:15:22.344409   13746 start.go:360] acquireMachinesLock for false-466000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:15:22.344442   13746 start.go:364] duration metric: took 27.291µs to acquireMachinesLock for "false-466000"
	I0318 04:15:22.344456   13746 start.go:93] Provisioning new machine with config: &{Name:false-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:15:22.344488   13746 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:15:22.351970   13746 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:15:22.369797   13746 start.go:159] libmachine.API.Create for "false-466000" (driver="qemu2")
	I0318 04:15:22.369825   13746 client.go:168] LocalClient.Create starting
	I0318 04:15:22.369882   13746 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:15:22.369913   13746 main.go:141] libmachine: Decoding PEM data...
	I0318 04:15:22.369923   13746 main.go:141] libmachine: Parsing certificate...
	I0318 04:15:22.369970   13746 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:15:22.369992   13746 main.go:141] libmachine: Decoding PEM data...
	I0318 04:15:22.370002   13746 main.go:141] libmachine: Parsing certificate...
	I0318 04:15:22.370425   13746 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:15:22.513144   13746 main.go:141] libmachine: Creating SSH key...
	I0318 04:15:22.597935   13746 main.go:141] libmachine: Creating Disk image...
	I0318 04:15:22.597940   13746 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:15:22.598125   13746 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/false-466000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/false-466000/disk.qcow2
	I0318 04:15:22.610777   13746 main.go:141] libmachine: STDOUT: 
	I0318 04:15:22.610796   13746 main.go:141] libmachine: STDERR: 
	I0318 04:15:22.610845   13746 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/false-466000/disk.qcow2 +20000M
	I0318 04:15:22.621998   13746 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:15:22.622020   13746 main.go:141] libmachine: STDERR: 
	I0318 04:15:22.622031   13746 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/false-466000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/false-466000/disk.qcow2
	I0318 04:15:22.622041   13746 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:15:22.622073   13746 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/false-466000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/false-466000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/false-466000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:a5:be:a5:a7:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/false-466000/disk.qcow2
	I0318 04:15:22.623991   13746 main.go:141] libmachine: STDOUT: 
	I0318 04:15:22.624006   13746 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:15:22.624023   13746 client.go:171] duration metric: took 254.200875ms to LocalClient.Create
	I0318 04:15:24.626149   13746 start.go:128] duration metric: took 2.281715042s to createHost
	I0318 04:15:24.626206   13746 start.go:83] releasing machines lock for "false-466000", held for 2.281831708s
	W0318 04:15:24.626292   13746 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:15:24.637345   13746 out.go:177] * Deleting "false-466000" in qemu2 ...
	W0318 04:15:24.668372   13746 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:15:24.668397   13746 start.go:728] Will try again in 5 seconds ...
	I0318 04:15:29.670439   13746 start.go:360] acquireMachinesLock for false-466000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:15:29.670949   13746 start.go:364] duration metric: took 396.792µs to acquireMachinesLock for "false-466000"
	I0318 04:15:29.671073   13746 start.go:93] Provisioning new machine with config: &{Name:false-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:15:29.671367   13746 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:15:29.681980   13746 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:15:29.731869   13746 start.go:159] libmachine.API.Create for "false-466000" (driver="qemu2")
	I0318 04:15:29.731926   13746 client.go:168] LocalClient.Create starting
	I0318 04:15:29.732022   13746 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:15:29.732080   13746 main.go:141] libmachine: Decoding PEM data...
	I0318 04:15:29.732099   13746 main.go:141] libmachine: Parsing certificate...
	I0318 04:15:29.732154   13746 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:15:29.732201   13746 main.go:141] libmachine: Decoding PEM data...
	I0318 04:15:29.732218   13746 main.go:141] libmachine: Parsing certificate...
	I0318 04:15:29.732729   13746 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:15:29.883261   13746 main.go:141] libmachine: Creating SSH key...
	I0318 04:15:29.941753   13746 main.go:141] libmachine: Creating Disk image...
	I0318 04:15:29.941760   13746 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:15:29.941956   13746 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/false-466000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/false-466000/disk.qcow2
	I0318 04:15:29.954110   13746 main.go:141] libmachine: STDOUT: 
	I0318 04:15:29.954133   13746 main.go:141] libmachine: STDERR: 
	I0318 04:15:29.954182   13746 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/false-466000/disk.qcow2 +20000M
	I0318 04:15:29.964922   13746 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:15:29.964940   13746 main.go:141] libmachine: STDERR: 
	I0318 04:15:29.964964   13746 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/false-466000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/false-466000/disk.qcow2
	I0318 04:15:29.964968   13746 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:15:29.965003   13746 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/false-466000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/false-466000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/false-466000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:be:ca:ba:03:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/false-466000/disk.qcow2
	I0318 04:15:29.966749   13746 main.go:141] libmachine: STDOUT: 
	I0318 04:15:29.966763   13746 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:15:29.966775   13746 client.go:171] duration metric: took 234.851541ms to LocalClient.Create
	I0318 04:15:31.968895   13746 start.go:128] duration metric: took 2.2975715s to createHost
	I0318 04:15:31.969009   13746 start.go:83] releasing machines lock for "false-466000", held for 2.298092041s
	W0318 04:15:31.969465   13746 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:15:31.977001   13746 out.go:177] 
	W0318 04:15:31.984086   13746 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:15:31.984116   13746 out.go:239] * 
	* 
	W0318 04:15:31.987232   13746 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:15:31.994925   13746 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.79s)
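Note that --cni=false is accepted as a real CNI mode here: the stderr shows 'Creating CNI manager for "false"' and the generated cluster config leaves NetworkPlugin empty, meaning CNI provisioning is deliberately skipped rather than the flag being rejected. The start still dies at the socket_vmnet connect before networking matters. Had the VM booted, a check along these lines would confirm that no CNI config was installed; /etc/cni/net.d is the conventional CNI directory and is an assumption, not something this log shows.

	out/minikube-darwin-arm64 ssh -p false-466000 -- ls /etc/cni/net.d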

TestNetworkPlugins/group/enable-default-cni/Start (9.98s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-466000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-466000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.979196125s)

-- stdout --
	* [enable-default-cni-466000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-466000" primary control-plane node in "enable-default-cni-466000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-466000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:15:34.274426   13856 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:15:34.274541   13856 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:15:34.274544   13856 out.go:304] Setting ErrFile to fd 2...
	I0318 04:15:34.274547   13856 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:15:34.274675   13856 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:15:34.275755   13856 out.go:298] Setting JSON to false
	I0318 04:15:34.291693   13856 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8107,"bootTime":1710752427,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:15:34.291747   13856 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:15:34.297502   13856 out.go:177] * [enable-default-cni-466000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:15:34.304540   13856 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:15:34.308494   13856 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:15:34.304583   13856 notify.go:220] Checking for updates...
	I0318 04:15:34.314400   13856 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:15:34.317501   13856 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:15:34.320560   13856 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:15:34.323497   13856 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:15:34.326905   13856 config.go:182] Loaded profile config "cert-expiration-153000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:15:34.326977   13856 config.go:182] Loaded profile config "multinode-886000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:15:34.327032   13856 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:15:34.331552   13856 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:15:34.338488   13856 start.go:297] selected driver: qemu2
	I0318 04:15:34.338493   13856 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:15:34.338498   13856 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:15:34.340778   13856 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:15:34.344499   13856 out.go:177] * Automatically selected the socket_vmnet network
	E0318 04:15:34.347522   13856 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0318 04:15:34.347537   13856 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:15:34.347594   13856 cni.go:84] Creating CNI manager for "bridge"
	I0318 04:15:34.347599   13856 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:15:34.347641   13856 start.go:340] cluster config:
	{Name:enable-default-cni-466000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:15:34.352165   13856 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:15:34.360358   13856 out.go:177] * Starting "enable-default-cni-466000" primary control-plane node in "enable-default-cni-466000" cluster
	I0318 04:15:34.364438   13856 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:15:34.364452   13856 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:15:34.364462   13856 cache.go:56] Caching tarball of preloaded images
	I0318 04:15:34.364514   13856 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:15:34.364520   13856 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:15:34.364579   13856 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/enable-default-cni-466000/config.json ...
	I0318 04:15:34.364590   13856 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/enable-default-cni-466000/config.json: {Name:mkc3db36ca0a3253cd09a3c5108d9da39fa89488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:15:34.364821   13856 start.go:360] acquireMachinesLock for enable-default-cni-466000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:15:34.364856   13856 start.go:364] duration metric: took 28.25µs to acquireMachinesLock for "enable-default-cni-466000"
	I0318 04:15:34.364871   13856 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:15:34.364900   13856 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:15:34.368427   13856 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:15:34.386173   13856 start.go:159] libmachine.API.Create for "enable-default-cni-466000" (driver="qemu2")
	I0318 04:15:34.386200   13856 client.go:168] LocalClient.Create starting
	I0318 04:15:34.386252   13856 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:15:34.386283   13856 main.go:141] libmachine: Decoding PEM data...
	I0318 04:15:34.386291   13856 main.go:141] libmachine: Parsing certificate...
	I0318 04:15:34.386337   13856 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:15:34.386358   13856 main.go:141] libmachine: Decoding PEM data...
	I0318 04:15:34.386366   13856 main.go:141] libmachine: Parsing certificate...
	I0318 04:15:34.386716   13856 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:15:34.525029   13856 main.go:141] libmachine: Creating SSH key...
	I0318 04:15:34.622287   13856 main.go:141] libmachine: Creating Disk image...
	I0318 04:15:34.622295   13856 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:15:34.622509   13856 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/enable-default-cni-466000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/enable-default-cni-466000/disk.qcow2
	I0318 04:15:34.634947   13856 main.go:141] libmachine: STDOUT: 
	I0318 04:15:34.634966   13856 main.go:141] libmachine: STDERR: 
	I0318 04:15:34.635029   13856 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/enable-default-cni-466000/disk.qcow2 +20000M
	I0318 04:15:34.645800   13856 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:15:34.645816   13856 main.go:141] libmachine: STDERR: 
	I0318 04:15:34.645831   13856 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/enable-default-cni-466000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/enable-default-cni-466000/disk.qcow2
	I0318 04:15:34.645839   13856 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:15:34.645871   13856 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/enable-default-cni-466000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/enable-default-cni-466000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/enable-default-cni-466000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:5f:2a:6a:89:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/enable-default-cni-466000/disk.qcow2
	I0318 04:15:34.647579   13856 main.go:141] libmachine: STDOUT: 
	I0318 04:15:34.647595   13856 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:15:34.647614   13856 client.go:171] duration metric: took 261.416791ms to LocalClient.Create
	I0318 04:15:36.649767   13856 start.go:128] duration metric: took 2.284912167s to createHost
	I0318 04:15:36.649857   13856 start.go:83] releasing machines lock for "enable-default-cni-466000", held for 2.285066666s
	W0318 04:15:36.649940   13856 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:15:36.664394   13856 out.go:177] * Deleting "enable-default-cni-466000" in qemu2 ...
	W0318 04:15:36.685712   13856 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:15:36.685739   13856 start.go:728] Will try again in 5 seconds ...
	I0318 04:15:41.686952   13856 start.go:360] acquireMachinesLock for enable-default-cni-466000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:15:41.687496   13856 start.go:364] duration metric: took 370.375µs to acquireMachinesLock for "enable-default-cni-466000"
	I0318 04:15:41.687653   13856 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:15:41.687903   13856 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:15:41.696687   13856 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:15:41.746937   13856 start.go:159] libmachine.API.Create for "enable-default-cni-466000" (driver="qemu2")
	I0318 04:15:41.746994   13856 client.go:168] LocalClient.Create starting
	I0318 04:15:41.747100   13856 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:15:41.747162   13856 main.go:141] libmachine: Decoding PEM data...
	I0318 04:15:41.747175   13856 main.go:141] libmachine: Parsing certificate...
	I0318 04:15:41.747236   13856 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:15:41.747291   13856 main.go:141] libmachine: Decoding PEM data...
	I0318 04:15:41.747304   13856 main.go:141] libmachine: Parsing certificate...
	I0318 04:15:41.747833   13856 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:15:42.071414   13856 main.go:141] libmachine: Creating SSH key...
	I0318 04:15:42.151054   13856 main.go:141] libmachine: Creating Disk image...
	I0318 04:15:42.151060   13856 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:15:42.151263   13856 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/enable-default-cni-466000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/enable-default-cni-466000/disk.qcow2
	I0318 04:15:42.163475   13856 main.go:141] libmachine: STDOUT: 
	I0318 04:15:42.163495   13856 main.go:141] libmachine: STDERR: 
	I0318 04:15:42.163575   13856 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/enable-default-cni-466000/disk.qcow2 +20000M
	I0318 04:15:42.174200   13856 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:15:42.174231   13856 main.go:141] libmachine: STDERR: 
	I0318 04:15:42.174246   13856 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/enable-default-cni-466000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/enable-default-cni-466000/disk.qcow2
	I0318 04:15:42.174250   13856 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:15:42.174292   13856 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/enable-default-cni-466000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/enable-default-cni-466000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/enable-default-cni-466000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:72:bf:99:60:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/enable-default-cni-466000/disk.qcow2
	I0318 04:15:42.176053   13856 main.go:141] libmachine: STDOUT: 
	I0318 04:15:42.176073   13856 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:15:42.176085   13856 client.go:171] duration metric: took 429.100792ms to LocalClient.Create
	I0318 04:15:44.178188   13856 start.go:128] duration metric: took 2.490336458s to createHost
	I0318 04:15:44.178250   13856 start.go:83] releasing machines lock for "enable-default-cni-466000", held for 2.490809875s
	W0318 04:15:44.178544   13856 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:15:44.188048   13856 out.go:177] 
	W0318 04:15:44.195191   13856 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:15:44.195217   13856 out.go:239] * 
	* 
	W0318 04:15:44.197879   13856 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:15:44.208082   13856 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.98s)
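Note: the E-line in the stderr above ("Found deprecated --enable-default-cni flag, setting --cni=bridge") shows this test exercising a legacy flag that minikube now rewrites internally. For comparison only, the equivalent modern invocation under that rewrite would be (hypothetical command line, same profile and driver as the failing run):

	out/minikube-darwin-arm64 start -p enable-default-cni-466000 --memory=3072 --cni=bridge --driver=qemu2

Both spellings end up creating the "bridge" CNI manager, as the cni.go:84 line in that stderr confirms, so the failure itself is unrelated to the deprecated flag.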
TestNetworkPlugins/group/flannel/Start (9.83s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-466000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-466000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.825752375s)

-- stdout --
	* [flannel-466000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-466000" primary control-plane node in "flannel-466000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-466000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0318 04:15:46.508256   13971 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:15:46.508410   13971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:15:46.508414   13971 out.go:304] Setting ErrFile to fd 2...
	I0318 04:15:46.508416   13971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:15:46.508536   13971 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:15:46.509575   13971 out.go:298] Setting JSON to false
	I0318 04:15:46.525525   13971 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8119,"bootTime":1710752427,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:15:46.525598   13971 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:15:46.531209   13971 out.go:177] * [flannel-466000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:15:46.538054   13971 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:15:46.541115   13971 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:15:46.538093   13971 notify.go:220] Checking for updates...
	I0318 04:15:46.548045   13971 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:15:46.552076   13971 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:15:46.555062   13971 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:15:46.558101   13971 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:15:46.561464   13971 config.go:182] Loaded profile config "cert-expiration-153000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:15:46.561532   13971 config.go:182] Loaded profile config "multinode-886000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:15:46.561578   13971 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:15:46.566037   13971 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:15:46.573166   13971 start.go:297] selected driver: qemu2
	I0318 04:15:46.573172   13971 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:15:46.573178   13971 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:15:46.575469   13971 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:15:46.578993   13971 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:15:46.582118   13971 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:15:46.582160   13971 cni.go:84] Creating CNI manager for "flannel"
	I0318 04:15:46.582172   13971 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0318 04:15:46.582214   13971 start.go:340] cluster config:
	{Name:flannel-466000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:15:46.586625   13971 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:15:46.592094   13971 out.go:177] * Starting "flannel-466000" primary control-plane node in "flannel-466000" cluster
	I0318 04:15:46.596063   13971 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:15:46.596079   13971 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:15:46.596092   13971 cache.go:56] Caching tarball of preloaded images
	I0318 04:15:46.596164   13971 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:15:46.596190   13971 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:15:46.596275   13971 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/flannel-466000/config.json ...
	I0318 04:15:46.596294   13971 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/flannel-466000/config.json: {Name:mkd57597e964f488452cd88ffed8cbe40c23c593 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:15:46.596516   13971 start.go:360] acquireMachinesLock for flannel-466000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:15:46.596550   13971 start.go:364] duration metric: took 27.125µs to acquireMachinesLock for "flannel-466000"
	I0318 04:15:46.596564   13971 start.go:93] Provisioning new machine with config: &{Name:flannel-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:15:46.596614   13971 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:15:46.605070   13971 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:15:46.622766   13971 start.go:159] libmachine.API.Create for "flannel-466000" (driver="qemu2")
	I0318 04:15:46.622793   13971 client.go:168] LocalClient.Create starting
	I0318 04:15:46.622856   13971 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:15:46.622884   13971 main.go:141] libmachine: Decoding PEM data...
	I0318 04:15:46.622894   13971 main.go:141] libmachine: Parsing certificate...
	I0318 04:15:46.622940   13971 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:15:46.622961   13971 main.go:141] libmachine: Decoding PEM data...
	I0318 04:15:46.622969   13971 main.go:141] libmachine: Parsing certificate...
	I0318 04:15:46.623341   13971 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:15:46.764543   13971 main.go:141] libmachine: Creating SSH key...
	I0318 04:15:46.864920   13971 main.go:141] libmachine: Creating Disk image...
	I0318 04:15:46.864927   13971 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:15:46.865123   13971 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/flannel-466000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/flannel-466000/disk.qcow2
	I0318 04:15:46.877534   13971 main.go:141] libmachine: STDOUT: 
	I0318 04:15:46.877561   13971 main.go:141] libmachine: STDERR: 
	I0318 04:15:46.877623   13971 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/flannel-466000/disk.qcow2 +20000M
	I0318 04:15:46.888128   13971 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:15:46.888145   13971 main.go:141] libmachine: STDERR: 
	I0318 04:15:46.888177   13971 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/flannel-466000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/flannel-466000/disk.qcow2
	I0318 04:15:46.888182   13971 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:15:46.888226   13971 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/flannel-466000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/flannel-466000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/flannel-466000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:3f:7f:ab:21:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/flannel-466000/disk.qcow2
	I0318 04:15:46.889945   13971 main.go:141] libmachine: STDOUT: 
	I0318 04:15:46.889966   13971 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:15:46.889987   13971 client.go:171] duration metric: took 267.19825ms to LocalClient.Create
	I0318 04:15:48.892178   13971 start.go:128] duration metric: took 2.295608083s to createHost
	I0318 04:15:48.892237   13971 start.go:83] releasing machines lock for "flannel-466000", held for 2.295753667s
	W0318 04:15:48.892305   13971 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:15:48.903548   13971 out.go:177] * Deleting "flannel-466000" in qemu2 ...
	W0318 04:15:48.932202   13971 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:15:48.932234   13971 start.go:728] Will try again in 5 seconds ...
	I0318 04:15:53.934285   13971 start.go:360] acquireMachinesLock for flannel-466000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:15:53.934792   13971 start.go:364] duration metric: took 375.209µs to acquireMachinesLock for "flannel-466000"
	I0318 04:15:53.934920   13971 start.go:93] Provisioning new machine with config: &{Name:flannel-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:15:53.935154   13971 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:15:53.950856   13971 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:15:53.999423   13971 start.go:159] libmachine.API.Create for "flannel-466000" (driver="qemu2")
	I0318 04:15:53.999475   13971 client.go:168] LocalClient.Create starting
	I0318 04:15:53.999584   13971 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:15:53.999641   13971 main.go:141] libmachine: Decoding PEM data...
	I0318 04:15:53.999655   13971 main.go:141] libmachine: Parsing certificate...
	I0318 04:15:53.999720   13971 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:15:53.999760   13971 main.go:141] libmachine: Decoding PEM data...
	I0318 04:15:53.999775   13971 main.go:141] libmachine: Parsing certificate...
	I0318 04:15:54.000304   13971 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:15:54.150989   13971 main.go:141] libmachine: Creating SSH key...
	I0318 04:15:54.230944   13971 main.go:141] libmachine: Creating Disk image...
	I0318 04:15:54.230949   13971 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:15:54.231137   13971 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/flannel-466000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/flannel-466000/disk.qcow2
	I0318 04:15:54.243629   13971 main.go:141] libmachine: STDOUT: 
	I0318 04:15:54.243649   13971 main.go:141] libmachine: STDERR: 
	I0318 04:15:54.243703   13971 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/flannel-466000/disk.qcow2 +20000M
	I0318 04:15:54.254284   13971 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:15:54.254306   13971 main.go:141] libmachine: STDERR: 
	I0318 04:15:54.254317   13971 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/flannel-466000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/flannel-466000/disk.qcow2
	I0318 04:15:54.254322   13971 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:15:54.254354   13971 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/flannel-466000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/flannel-466000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/flannel-466000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:4c:37:7c:7e:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/flannel-466000/disk.qcow2
	I0318 04:15:54.256091   13971 main.go:141] libmachine: STDOUT: 
	I0318 04:15:54.256108   13971 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:15:54.256120   13971 client.go:171] duration metric: took 256.646708ms to LocalClient.Create
	I0318 04:15:56.258223   13971 start.go:128] duration metric: took 2.323114417s to createHost
	I0318 04:15:56.258273   13971 start.go:83] releasing machines lock for "flannel-466000", held for 2.323530541s
	W0318 04:15:56.258693   13971 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:15:56.274402   13971 out.go:177] 
	W0318 04:15:56.277582   13971 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:15:56.277608   13971 out.go:239] * 
	* 
	W0318 04:15:56.280062   13971 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:15:56.290401   13971 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.83s)
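Note: since every run in this group dies before QEMU boots, the failure can likely be reproduced without minikube by exercising the same fd-passing handshake the logged qemu-system-aarch64 invocations use. A sketch, reusing the client path from the logs (the trailing `true` is an arbitrary child command chosen only to exercise the socket handshake; any command should behave the same here):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the daemon is down, this should print the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused' error that appears in every test above and below.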
TestNetworkPlugins/group/bridge/Start (9.82s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-466000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-466000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.823199791s)

-- stdout --
	* [bridge-466000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-466000" primary control-plane node in "bridge-466000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-466000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0318 04:15:58.774366   14089 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:15:58.774506   14089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:15:58.774509   14089 out.go:304] Setting ErrFile to fd 2...
	I0318 04:15:58.774512   14089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:15:58.774646   14089 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:15:58.775721   14089 out.go:298] Setting JSON to false
	I0318 04:15:58.791739   14089 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8131,"bootTime":1710752427,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:15:58.791805   14089 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:15:58.797660   14089 out.go:177] * [bridge-466000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:15:58.804671   14089 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:15:58.809665   14089 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:15:58.804733   14089 notify.go:220] Checking for updates...
	I0318 04:15:58.816661   14089 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:15:58.820711   14089 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:15:58.823678   14089 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:15:58.826649   14089 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:15:58.830017   14089 config.go:182] Loaded profile config "cert-expiration-153000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:15:58.830094   14089 config.go:182] Loaded profile config "multinode-886000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:15:58.830147   14089 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:15:58.833608   14089 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:15:58.840683   14089 start.go:297] selected driver: qemu2
	I0318 04:15:58.840690   14089 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:15:58.840696   14089 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:15:58.843003   14089 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:15:58.846635   14089 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:15:58.850699   14089 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:15:58.850743   14089 cni.go:84] Creating CNI manager for "bridge"
	I0318 04:15:58.850753   14089 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:15:58.850783   14089 start.go:340] cluster config:
	{Name:bridge-466000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:15:58.855387   14089 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:15:58.863670   14089 out.go:177] * Starting "bridge-466000" primary control-plane node in "bridge-466000" cluster
	I0318 04:15:58.867641   14089 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:15:58.867657   14089 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:15:58.867667   14089 cache.go:56] Caching tarball of preloaded images
	I0318 04:15:58.867736   14089 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:15:58.867743   14089 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:15:58.867804   14089 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/bridge-466000/config.json ...
	I0318 04:15:58.867819   14089 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/bridge-466000/config.json: {Name:mkcaaa42059d2766c0b1d17094f9a023811f0d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:15:58.868036   14089 start.go:360] acquireMachinesLock for bridge-466000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:15:58.868070   14089 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "bridge-466000"
	I0318 04:15:58.868084   14089 start.go:93] Provisioning new machine with config: &{Name:bridge-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:15:58.868114   14089 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:15:58.872701   14089 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:15:58.889546   14089 start.go:159] libmachine.API.Create for "bridge-466000" (driver="qemu2")
	I0318 04:15:58.889573   14089 client.go:168] LocalClient.Create starting
	I0318 04:15:58.889640   14089 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:15:58.889671   14089 main.go:141] libmachine: Decoding PEM data...
	I0318 04:15:58.889687   14089 main.go:141] libmachine: Parsing certificate...
	I0318 04:15:58.889732   14089 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:15:58.889754   14089 main.go:141] libmachine: Decoding PEM data...
	I0318 04:15:58.889761   14089 main.go:141] libmachine: Parsing certificate...
	I0318 04:15:58.890137   14089 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:15:59.098025   14089 main.go:141] libmachine: Creating SSH key...
	I0318 04:15:59.124413   14089 main.go:141] libmachine: Creating Disk image...
	I0318 04:15:59.124422   14089 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:15:59.124607   14089 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/bridge-466000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/bridge-466000/disk.qcow2
	I0318 04:15:59.136822   14089 main.go:141] libmachine: STDOUT: 
	I0318 04:15:59.136840   14089 main.go:141] libmachine: STDERR: 
	I0318 04:15:59.136899   14089 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/bridge-466000/disk.qcow2 +20000M
	I0318 04:15:59.147704   14089 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:15:59.147721   14089 main.go:141] libmachine: STDERR: 
	I0318 04:15:59.147730   14089 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/bridge-466000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/bridge-466000/disk.qcow2
	I0318 04:15:59.147736   14089 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:15:59.147771   14089 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/bridge-466000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/bridge-466000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/bridge-466000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:32:b9:b6:83:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/bridge-466000/disk.qcow2
	I0318 04:15:59.149488   14089 main.go:141] libmachine: STDOUT: 
	I0318 04:15:59.149504   14089 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:15:59.149522   14089 client.go:171] duration metric: took 259.952834ms to LocalClient.Create
	I0318 04:16:01.151779   14089 start.go:128] duration metric: took 2.283694875s to createHost
	I0318 04:16:01.151845   14089 start.go:83] releasing machines lock for "bridge-466000", held for 2.283842084s
	W0318 04:16:01.151898   14089 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:16:01.164704   14089 out.go:177] * Deleting "bridge-466000" in qemu2 ...
	W0318 04:16:01.190234   14089 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:16:01.190270   14089 start.go:728] Will try again in 5 seconds ...
	I0318 04:16:06.192288   14089 start.go:360] acquireMachinesLock for bridge-466000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:16:06.192737   14089 start.go:364] duration metric: took 358.166µs to acquireMachinesLock for "bridge-466000"
	I0318 04:16:06.192858   14089 start.go:93] Provisioning new machine with config: &{Name:bridge-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:16:06.193143   14089 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:16:06.203821   14089 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:16:06.254095   14089 start.go:159] libmachine.API.Create for "bridge-466000" (driver="qemu2")
	I0318 04:16:06.254152   14089 client.go:168] LocalClient.Create starting
	I0318 04:16:06.254279   14089 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:16:06.254333   14089 main.go:141] libmachine: Decoding PEM data...
	I0318 04:16:06.254352   14089 main.go:141] libmachine: Parsing certificate...
	I0318 04:16:06.254422   14089 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:16:06.254463   14089 main.go:141] libmachine: Decoding PEM data...
	I0318 04:16:06.254475   14089 main.go:141] libmachine: Parsing certificate...
	I0318 04:16:06.254997   14089 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:16:06.405779   14089 main.go:141] libmachine: Creating SSH key...
	I0318 04:16:06.490267   14089 main.go:141] libmachine: Creating Disk image...
	I0318 04:16:06.490273   14089 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:16:06.490451   14089 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/bridge-466000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/bridge-466000/disk.qcow2
	I0318 04:16:06.502724   14089 main.go:141] libmachine: STDOUT: 
	I0318 04:16:06.502742   14089 main.go:141] libmachine: STDERR: 
	I0318 04:16:06.502803   14089 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/bridge-466000/disk.qcow2 +20000M
	I0318 04:16:06.513428   14089 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:16:06.513463   14089 main.go:141] libmachine: STDERR: 
	I0318 04:16:06.513474   14089 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/bridge-466000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/bridge-466000/disk.qcow2
	I0318 04:16:06.513479   14089 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:16:06.513511   14089 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/bridge-466000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/bridge-466000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/bridge-466000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:94:e5:de:60:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/bridge-466000/disk.qcow2
	I0318 04:16:06.515221   14089 main.go:141] libmachine: STDOUT: 
	I0318 04:16:06.515237   14089 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:16:06.515249   14089 client.go:171] duration metric: took 261.097834ms to LocalClient.Create
	I0318 04:16:08.517359   14089 start.go:128] duration metric: took 2.324260875s to createHost
	I0318 04:16:08.517417   14089 start.go:83] releasing machines lock for "bridge-466000", held for 2.324730625s
	W0318 04:16:08.517793   14089 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:16:08.534407   14089 out.go:177] 
	W0318 04:16:08.538485   14089 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:16:08.538513   14089 out.go:239] * 
	* 
	W0318 04:16:08.540871   14089 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:16:08.553411   14089 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.82s)
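Every start attempt in this failure dies before QEMU ever boots: socket_vmnet_client gets "Connection refused" dialing the unix socket at /var/run/socket_vmnet, which means no socket_vmnet daemon was listening on the host. A minimal standalone Go sketch of that reachability check follows; it is a hypothetical helper, not part of minikube, and the socket path is simply the SocketVMnetPath recorded in the cluster config above:

	// socketprobe.go: a minimal sketch that dials the socket_vmnet unix
	// socket and reports whether a daemon is accepting connections.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// SocketVMnetPath from the cluster config above.
		const socketPath = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
		if err != nil {
			// With no daemon listening, this reports the same
			// "connection refused" condition the test hit.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", socketPath, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", socketPath)
	}

Running a probe like this on the CI host before the test group would distinguish a dead socket_vmnet daemon (as here) from a genuine minikube regression.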
TestNetworkPlugins/group/kubenet/Start (9.86s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-466000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-466000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.86213125s)

-- stdout --
	* [kubenet-466000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-466000" primary control-plane node in "kubenet-466000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-466000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:16:10.831554   14201 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:16:10.831699   14201 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:16:10.831702   14201 out.go:304] Setting ErrFile to fd 2...
	I0318 04:16:10.831704   14201 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:16:10.831840   14201 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:16:10.832883   14201 out.go:298] Setting JSON to false
	I0318 04:16:10.848867   14201 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8143,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:16:10.848927   14201 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:16:10.855277   14201 out.go:177] * [kubenet-466000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:16:10.862192   14201 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:16:10.862244   14201 notify.go:220] Checking for updates...
	I0318 04:16:10.871279   14201 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:16:10.874265   14201 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:16:10.877277   14201 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:16:10.880292   14201 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:16:10.883303   14201 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:16:10.886610   14201 config.go:182] Loaded profile config "cert-expiration-153000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:16:10.886682   14201 config.go:182] Loaded profile config "multinode-886000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:16:10.886728   14201 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:16:10.891299   14201 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:16:10.898227   14201 start.go:297] selected driver: qemu2
	I0318 04:16:10.898231   14201 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:16:10.898237   14201 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:16:10.900485   14201 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:16:10.903282   14201 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:16:10.906272   14201 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:16:10.906315   14201 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0318 04:16:10.906345   14201 start.go:340] cluster config:
	{Name:kubenet-466000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:16:10.910813   14201 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:10.919254   14201 out.go:177] * Starting "kubenet-466000" primary control-plane node in "kubenet-466000" cluster
	I0318 04:16:10.923241   14201 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:16:10.923282   14201 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:16:10.923293   14201 cache.go:56] Caching tarball of preloaded images
	I0318 04:16:10.923353   14201 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:16:10.923360   14201 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:16:10.923423   14201 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/kubenet-466000/config.json ...
	I0318 04:16:10.923434   14201 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/kubenet-466000/config.json: {Name:mkf7db54406ba6662cf83e651ac04793f3047978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:16:10.923656   14201 start.go:360] acquireMachinesLock for kubenet-466000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:16:10.923689   14201 start.go:364] duration metric: took 27.417µs to acquireMachinesLock for "kubenet-466000"
	I0318 04:16:10.923703   14201 start.go:93] Provisioning new machine with config: &{Name:kubenet-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:16:10.923736   14201 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:16:10.927345   14201 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:16:10.945125   14201 start.go:159] libmachine.API.Create for "kubenet-466000" (driver="qemu2")
	I0318 04:16:10.945154   14201 client.go:168] LocalClient.Create starting
	I0318 04:16:10.945224   14201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:16:10.945254   14201 main.go:141] libmachine: Decoding PEM data...
	I0318 04:16:10.945265   14201 main.go:141] libmachine: Parsing certificate...
	I0318 04:16:10.945313   14201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:16:10.945336   14201 main.go:141] libmachine: Decoding PEM data...
	I0318 04:16:10.945342   14201 main.go:141] libmachine: Parsing certificate...
	I0318 04:16:10.945703   14201 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:16:11.084223   14201 main.go:141] libmachine: Creating SSH key...
	I0318 04:16:11.171622   14201 main.go:141] libmachine: Creating Disk image...
	I0318 04:16:11.171628   14201 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:16:11.171805   14201 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubenet-466000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubenet-466000/disk.qcow2
	I0318 04:16:11.184390   14201 main.go:141] libmachine: STDOUT: 
	I0318 04:16:11.184413   14201 main.go:141] libmachine: STDERR: 
	I0318 04:16:11.184469   14201 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubenet-466000/disk.qcow2 +20000M
	I0318 04:16:11.195155   14201 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:16:11.195169   14201 main.go:141] libmachine: STDERR: 
	I0318 04:16:11.195180   14201 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubenet-466000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubenet-466000/disk.qcow2
	I0318 04:16:11.195184   14201 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:16:11.195218   14201 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubenet-466000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubenet-466000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubenet-466000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:44:16:06:e4:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubenet-466000/disk.qcow2
	I0318 04:16:11.196935   14201 main.go:141] libmachine: STDOUT: 
	I0318 04:16:11.196958   14201 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:16:11.196977   14201 client.go:171] duration metric: took 251.825709ms to LocalClient.Create
	I0318 04:16:13.199143   14201 start.go:128] duration metric: took 2.27545925s to createHost
	I0318 04:16:13.199211   14201 start.go:83] releasing machines lock for "kubenet-466000", held for 2.27558825s
	W0318 04:16:13.199272   14201 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:16:13.215359   14201 out.go:177] * Deleting "kubenet-466000" in qemu2 ...
	W0318 04:16:13.240249   14201 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:16:13.240293   14201 start.go:728] Will try again in 5 seconds ...
	I0318 04:16:18.241274   14201 start.go:360] acquireMachinesLock for kubenet-466000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:16:18.241656   14201 start.go:364] duration metric: took 308.666µs to acquireMachinesLock for "kubenet-466000"
	I0318 04:16:18.241790   14201 start.go:93] Provisioning new machine with config: &{Name:kubenet-466000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-466000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:16:18.242025   14201 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:16:18.252787   14201 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:16:18.300992   14201 start.go:159] libmachine.API.Create for "kubenet-466000" (driver="qemu2")
	I0318 04:16:18.301057   14201 client.go:168] LocalClient.Create starting
	I0318 04:16:18.301164   14201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:16:18.301241   14201 main.go:141] libmachine: Decoding PEM data...
	I0318 04:16:18.301259   14201 main.go:141] libmachine: Parsing certificate...
	I0318 04:16:18.301305   14201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:16:18.301347   14201 main.go:141] libmachine: Decoding PEM data...
	I0318 04:16:18.301361   14201 main.go:141] libmachine: Parsing certificate...
	I0318 04:16:18.301856   14201 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:16:18.453031   14201 main.go:141] libmachine: Creating SSH key...
	I0318 04:16:18.590847   14201 main.go:141] libmachine: Creating Disk image...
	I0318 04:16:18.590856   14201 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:16:18.591040   14201 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubenet-466000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubenet-466000/disk.qcow2
	I0318 04:16:18.603448   14201 main.go:141] libmachine: STDOUT: 
	I0318 04:16:18.603468   14201 main.go:141] libmachine: STDERR: 
	I0318 04:16:18.603534   14201 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubenet-466000/disk.qcow2 +20000M
	I0318 04:16:18.614326   14201 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:16:18.614342   14201 main.go:141] libmachine: STDERR: 
	I0318 04:16:18.614353   14201 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubenet-466000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubenet-466000/disk.qcow2
	I0318 04:16:18.614358   14201 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:16:18.614400   14201 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubenet-466000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubenet-466000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubenet-466000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:7c:c2:be:ad:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/kubenet-466000/disk.qcow2
	I0318 04:16:18.616080   14201 main.go:141] libmachine: STDOUT: 
	I0318 04:16:18.616093   14201 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:16:18.616107   14201 client.go:171] duration metric: took 315.053667ms to LocalClient.Create
	I0318 04:16:20.618319   14201 start.go:128] duration metric: took 2.376316708s to createHost
	I0318 04:16:20.618421   14201 start.go:83] releasing machines lock for "kubenet-466000", held for 2.37681725s
	W0318 04:16:20.618788   14201 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-466000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:16:20.628400   14201 out.go:177] 
	W0318 04:16:20.635613   14201 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:16:20.635643   14201 out.go:239] * 
	* 
	W0318 04:16:20.638325   14201 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:16:20.648454   14201 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.86s)
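The log above also traces minikube's recovery path: after the first StartHost failure it deletes the half-created profile, waits a fixed 5 seconds ("Will try again in 5 seconds ..."), retries host creation exactly once, and only then exits with GUEST_PROVISION. A compressed Go sketch of that single fixed-delay retry is below; the function names are illustrative stand-ins, not minikube's actual API, and createHost is hard-wired to fail the way these runs do:

	// retrysketch.go: an illustrative sketch of the one-retry,
	// fixed-5s-delay behaviour observed in the log above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the real provisioning step; here it
	// always fails with the connection-refused error from the log.
	func createHost(name string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func startWithRetry(name string) error {
		if err := createHost(name); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := createHost(name); err != nil {
				return fmt.Errorf("error provisioning guest: %w", err)
			}
		}
		return nil
	}

	func main() {
		if err := startWithRetry("kubenet-466000"); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}

Because the daemon never comes back within those 5 seconds, the retry fails identically, which is why every test in this group lands at almost exactly the same ~10s duration.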

TestStartStop/group/old-k8s-version/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-492000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-492000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.843685667s)

-- stdout --
	* [old-k8s-version-492000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-492000" primary control-plane node in "old-k8s-version-492000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-492000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:16:22.934414   14311 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:16:22.934530   14311 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:16:22.934538   14311 out.go:304] Setting ErrFile to fd 2...
	I0318 04:16:22.934542   14311 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:16:22.934675   14311 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:16:22.935727   14311 out.go:298] Setting JSON to false
	I0318 04:16:22.951916   14311 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8155,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:16:22.951974   14311 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:16:22.958422   14311 out.go:177] * [old-k8s-version-492000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:16:22.965331   14311 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:16:22.968398   14311 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:16:22.965371   14311 notify.go:220] Checking for updates...
	I0318 04:16:22.974338   14311 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:16:22.978429   14311 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:16:22.981399   14311 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:16:22.984380   14311 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:16:22.987752   14311 config.go:182] Loaded profile config "cert-expiration-153000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:16:22.987821   14311 config.go:182] Loaded profile config "multinode-886000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:16:22.987894   14311 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:16:22.992400   14311 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:16:22.999352   14311 start.go:297] selected driver: qemu2
	I0318 04:16:22.999359   14311 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:16:22.999365   14311 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:16:23.001680   14311 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:16:23.004384   14311 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:16:23.005855   14311 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:16:23.005894   14311 cni.go:84] Creating CNI manager for ""
	I0318 04:16:23.005903   14311 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 04:16:23.005931   14311 start.go:340] cluster config:
	{Name:old-k8s-version-492000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:16:23.010327   14311 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:23.018419   14311 out.go:177] * Starting "old-k8s-version-492000" primary control-plane node in "old-k8s-version-492000" cluster
	I0318 04:16:23.021395   14311 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 04:16:23.021409   14311 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 04:16:23.021417   14311 cache.go:56] Caching tarball of preloaded images
	I0318 04:16:23.021473   14311 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:16:23.021479   14311 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 04:16:23.021548   14311 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/old-k8s-version-492000/config.json ...
	I0318 04:16:23.021559   14311 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/old-k8s-version-492000/config.json: {Name:mka9ec2996b1a519ef2e8b40d08fba76daca641f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:16:23.021873   14311 start.go:360] acquireMachinesLock for old-k8s-version-492000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:16:23.021908   14311 start.go:364] duration metric: took 27.167µs to acquireMachinesLock for "old-k8s-version-492000"
	I0318 04:16:23.021921   14311 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:16:23.021952   14311 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:16:23.026203   14311 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:16:23.044121   14311 start.go:159] libmachine.API.Create for "old-k8s-version-492000" (driver="qemu2")
	I0318 04:16:23.044146   14311 client.go:168] LocalClient.Create starting
	I0318 04:16:23.044201   14311 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:16:23.044231   14311 main.go:141] libmachine: Decoding PEM data...
	I0318 04:16:23.044241   14311 main.go:141] libmachine: Parsing certificate...
	I0318 04:16:23.044290   14311 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:16:23.044313   14311 main.go:141] libmachine: Decoding PEM data...
	I0318 04:16:23.044322   14311 main.go:141] libmachine: Parsing certificate...
	I0318 04:16:23.044728   14311 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:16:23.184861   14311 main.go:141] libmachine: Creating SSH key...
	I0318 04:16:23.269882   14311 main.go:141] libmachine: Creating Disk image...
	I0318 04:16:23.269888   14311 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:16:23.270085   14311 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/disk.qcow2
	I0318 04:16:23.282734   14311 main.go:141] libmachine: STDOUT: 
	I0318 04:16:23.282751   14311 main.go:141] libmachine: STDERR: 
	I0318 04:16:23.282806   14311 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/disk.qcow2 +20000M
	I0318 04:16:23.294054   14311 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:16:23.294070   14311 main.go:141] libmachine: STDERR: 
	I0318 04:16:23.294081   14311 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/disk.qcow2
	I0318 04:16:23.294086   14311 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:16:23.294119   14311 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:c7:89:00:40:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/disk.qcow2
	I0318 04:16:23.295941   14311 main.go:141] libmachine: STDOUT: 
	I0318 04:16:23.295960   14311 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:16:23.295978   14311 client.go:171] duration metric: took 251.835166ms to LocalClient.Create
	I0318 04:16:25.298096   14311 start.go:128] duration metric: took 2.276196541s to createHost
	I0318 04:16:25.298177   14311 start.go:83] releasing machines lock for "old-k8s-version-492000", held for 2.27633575s
	W0318 04:16:25.298275   14311 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:16:25.309351   14311 out.go:177] * Deleting "old-k8s-version-492000" in qemu2 ...
	W0318 04:16:25.339957   14311 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:16:25.339984   14311 start.go:728] Will try again in 5 seconds ...
	I0318 04:16:30.342000   14311 start.go:360] acquireMachinesLock for old-k8s-version-492000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:16:30.342471   14311 start.go:364] duration metric: took 357.042µs to acquireMachinesLock for "old-k8s-version-492000"
	I0318 04:16:30.342604   14311 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:16:30.342876   14311 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:16:30.354462   14311 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:16:30.402959   14311 start.go:159] libmachine.API.Create for "old-k8s-version-492000" (driver="qemu2")
	I0318 04:16:30.403010   14311 client.go:168] LocalClient.Create starting
	I0318 04:16:30.403108   14311 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:16:30.403168   14311 main.go:141] libmachine: Decoding PEM data...
	I0318 04:16:30.403189   14311 main.go:141] libmachine: Parsing certificate...
	I0318 04:16:30.403256   14311 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:16:30.403302   14311 main.go:141] libmachine: Decoding PEM data...
	I0318 04:16:30.403313   14311 main.go:141] libmachine: Parsing certificate...
	I0318 04:16:30.403857   14311 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:16:30.560138   14311 main.go:141] libmachine: Creating SSH key...
	I0318 04:16:30.609647   14311 main.go:141] libmachine: Creating Disk image...
	I0318 04:16:30.609652   14311 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:16:30.609835   14311 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/disk.qcow2
	I0318 04:16:30.622445   14311 main.go:141] libmachine: STDOUT: 
	I0318 04:16:30.622474   14311 main.go:141] libmachine: STDERR: 
	I0318 04:16:30.622531   14311 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/disk.qcow2 +20000M
	I0318 04:16:30.702725   14311 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:16:30.702765   14311 main.go:141] libmachine: STDERR: 
	I0318 04:16:30.702784   14311 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/disk.qcow2
	I0318 04:16:30.702792   14311 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:16:30.702841   14311 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:98:5f:48:23:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/disk.qcow2
	I0318 04:16:30.705542   14311 main.go:141] libmachine: STDOUT: 
	I0318 04:16:30.705565   14311 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:16:30.705582   14311 client.go:171] duration metric: took 302.575625ms to LocalClient.Create
	I0318 04:16:32.707702   14311 start.go:128] duration metric: took 2.364878083s to createHost
	I0318 04:16:32.707780   14311 start.go:83] releasing machines lock for "old-k8s-version-492000", held for 2.36536475s
	W0318 04:16:32.708160   14311 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-492000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-492000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:16:32.717784   14311 out.go:177] 
	W0318 04:16:32.720773   14311 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:16:32.720828   14311 out.go:239] * 
	* 
	W0318 04:16:32.723394   14311 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:16:32.733778   14311 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-492000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000: exit status 7 (71.839958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.92s)
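Every failure in this group has the same proximate cause visible in the stderr log above: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a vmnet file descriptor and the VM never boots. A minimal triage sequence on the affected host, assuming the /opt/socket_vmnet install prefix shown in the log (the gateway address and the Homebrew service name are assumptions, not taken from this report):

    # Is the control socket present, and is the daemon running?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # If socket_vmnet was installed via Homebrew, restart its root service:
    sudo brew services restart socket_vmnet

    # Or run the daemon in the foreground to watch for startup errors:
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet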

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-492000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-492000 create -f testdata/busybox.yaml: exit status 1 (29.385042ms)

** stderr ** 
	error: context "old-k8s-version-492000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-492000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000: exit status 7 (31.438416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-492000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000: exit status 7 (31.042375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
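This failure is purely downstream of FirstStart: because the cluster was never created, no kubeconfig context named old-k8s-version-492000 exists for kubectl to use. When reproducing manually, a quick sanity check before applying manifests (standard kubectl commands; the context only appears after a successful start):

    # List contexts known to the active kubeconfig
    kubectl config get-contexts

    # Switch to the profile's context once it exists
    kubectl config use-context old-k8s-version-492000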

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-492000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-492000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-492000 describe deploy/metrics-server -n kube-system: exit status 1 (27.094875ms)

** stderr ** 
	error: context "old-k8s-version-492000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-492000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000: exit status 7 (31.789042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
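The assertion here reads the metrics-server Deployment back and expects its image to carry the fake.domain registry prefix injected by --registries. On a healthy cluster the same check can be made with a JSONPath query; a sketch, assuming the image sits in the first (only) container of the pod template:

    kubectl --context old-k8s-version-492000 -n kube-system \
      get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected output: fake.domain/registry.k8s.io/echoserver:1.4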

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-492000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-492000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.189202792s)

-- stdout --
	* [old-k8s-version-492000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-492000" primary control-plane node in "old-k8s-version-492000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-492000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-492000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:16:36.902698   14366 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:16:36.902816   14366 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:16:36.902819   14366 out.go:304] Setting ErrFile to fd 2...
	I0318 04:16:36.902822   14366 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:16:36.902964   14366 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:16:36.903993   14366 out.go:298] Setting JSON to false
	I0318 04:16:36.920108   14366 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8169,"bootTime":1710752427,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:16:36.920173   14366 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:16:36.924604   14366 out.go:177] * [old-k8s-version-492000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:16:36.930548   14366 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:16:36.930619   14366 notify.go:220] Checking for updates...
	I0318 04:16:36.937447   14366 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:16:36.941544   14366 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:16:36.944541   14366 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:16:36.947479   14366 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:16:36.950527   14366 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:16:36.953937   14366 config.go:182] Loaded profile config "old-k8s-version-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0318 04:16:36.957592   14366 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 04:16:36.960571   14366 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:16:36.964527   14366 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:16:36.971530   14366 start.go:297] selected driver: qemu2
	I0318 04:16:36.971536   14366 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:16:36.971604   14366 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:16:36.973977   14366 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:16:36.974024   14366 cni.go:84] Creating CNI manager for ""
	I0318 04:16:36.974030   14366 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 04:16:36.974050   14366 start.go:340] cluster config:
	{Name:old-k8s-version-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:16:36.978457   14366 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:36.986522   14366 out.go:177] * Starting "old-k8s-version-492000" primary control-plane node in "old-k8s-version-492000" cluster
	I0318 04:16:36.990586   14366 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 04:16:36.990609   14366 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 04:16:36.990622   14366 cache.go:56] Caching tarball of preloaded images
	I0318 04:16:36.990680   14366 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:16:36.990691   14366 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 04:16:36.990748   14366 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/old-k8s-version-492000/config.json ...
	I0318 04:16:36.991258   14366 start.go:360] acquireMachinesLock for old-k8s-version-492000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:16:36.991288   14366 start.go:364] duration metric: took 23.583µs to acquireMachinesLock for "old-k8s-version-492000"
	I0318 04:16:36.991298   14366 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:16:36.991303   14366 fix.go:54] fixHost starting: 
	I0318 04:16:36.991434   14366 fix.go:112] recreateIfNeeded on old-k8s-version-492000: state=Stopped err=<nil>
	W0318 04:16:36.991443   14366 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:16:36.995526   14366 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-492000" ...
	I0318 04:16:37.002581   14366 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:98:5f:48:23:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/disk.qcow2
	I0318 04:16:37.004857   14366 main.go:141] libmachine: STDOUT: 
	I0318 04:16:37.004881   14366 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:16:37.004913   14366 fix.go:56] duration metric: took 13.609708ms for fixHost
	I0318 04:16:37.004918   14366 start.go:83] releasing machines lock for "old-k8s-version-492000", held for 13.625375ms
	W0318 04:16:37.004926   14366 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:16:37.004961   14366 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:16:37.004967   14366 start.go:728] Will try again in 5 seconds ...
	I0318 04:16:42.006916   14366 start.go:360] acquireMachinesLock for old-k8s-version-492000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:16:42.007212   14366 start.go:364] duration metric: took 240.334µs to acquireMachinesLock for "old-k8s-version-492000"
	I0318 04:16:42.007328   14366 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:16:42.007343   14366 fix.go:54] fixHost starting: 
	I0318 04:16:42.007989   14366 fix.go:112] recreateIfNeeded on old-k8s-version-492000: state=Stopped err=<nil>
	W0318 04:16:42.008014   14366 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:16:42.015088   14366 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-492000" ...
	I0318 04:16:42.018207   14366 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:98:5f:48:23:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/old-k8s-version-492000/disk.qcow2
	I0318 04:16:42.027891   14366 main.go:141] libmachine: STDOUT: 
	I0318 04:16:42.027975   14366 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:16:42.028058   14366 fix.go:56] duration metric: took 20.713916ms for fixHost
	I0318 04:16:42.028080   14366 start.go:83] releasing machines lock for "old-k8s-version-492000", held for 20.845208ms
	W0318 04:16:42.028301   14366 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-492000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-492000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:16:42.036028   14366 out.go:177] 
	W0318 04:16:42.039206   14366 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:16:42.039228   14366 out.go:239] * 
	* 
	W0318 04:16:42.041395   14366 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:16:42.049012   14366 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-492000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000: exit status 7 (72.88325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
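The post-mortem helper polls host state with a Go template over minikube's status output; other status fields can be queried the same way. A sketch (the Kubelet and APIServer field names come from minikube's documented status format, not from this log):

    # Host state only, as the helper does
    out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000

    # Several fields at once
    out/minikube-darwin-arm64 status -p old-k8s-version-492000 \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'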

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-492000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000: exit status 7 (34.334667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)
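The helper gives up immediately because building a client config requires the missing kubeconfig context. On a running cluster, the equivalent manual wait would look like the following; the k8s-app=kubernetes-dashboard label is the upstream dashboard convention and is an assumption here:

    kubectl --context old-k8s-version-492000 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=180s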

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-492000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-492000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-492000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.43325ms)

** stderr ** 
	error: context "old-k8s-version-492000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-492000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000: exit status 7 (31.270209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-492000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000: exit status 7 (31.119083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
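The (-want +got) block above is a go-cmp diff: every image expected for v1.20.0 is reported missing because the VM never booted, so image list returned nothing. With a live profile, the got side can be inspected directly; a sketch using the JSON output (piping through jq and the repoTags field name are assumptions about the JSON shape):

    out/minikube-darwin-arm64 -p old-k8s-version-492000 image list --format=json \
      | jq -r '.[].repoTags[]' | sort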

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-492000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-492000 --alsologtostderr -v=1: exit status 83 (42.562583ms)

-- stdout --
	* The control-plane node old-k8s-version-492000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-492000"

-- /stdout --
** stderr ** 
	I0318 04:16:42.334425   14385 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:16:42.334881   14385 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:16:42.334890   14385 out.go:304] Setting ErrFile to fd 2...
	I0318 04:16:42.334893   14385 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:16:42.335031   14385 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:16:42.335241   14385 out.go:298] Setting JSON to false
	I0318 04:16:42.335250   14385 mustload.go:65] Loading cluster: old-k8s-version-492000
	I0318 04:16:42.335462   14385 config.go:182] Loaded profile config "old-k8s-version-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0318 04:16:42.339556   14385 out.go:177] * The control-plane node old-k8s-version-492000 host is not running: state=Stopped
	I0318 04:16:42.343528   14385 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-492000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-492000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000: exit status 7 (30.861417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-492000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000: exit status 7 (31.246375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (9.85s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-586000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-586000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (9.776326292s)

-- stdout --
	* [no-preload-586000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-586000" primary control-plane node in "no-preload-586000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-586000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:16:42.812995   14408 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:16:42.813111   14408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:16:42.813115   14408 out.go:304] Setting ErrFile to fd 2...
	I0318 04:16:42.813117   14408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:16:42.813237   14408 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:16:42.814278   14408 out.go:298] Setting JSON to false
	I0318 04:16:42.830285   14408 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8175,"bootTime":1710752427,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:16:42.830340   14408 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:16:42.834601   14408 out.go:177] * [no-preload-586000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:16:42.840488   14408 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:16:42.843499   14408 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:16:42.840527   14408 notify.go:220] Checking for updates...
	I0318 04:16:42.849440   14408 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:16:42.852494   14408 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:16:42.855496   14408 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:16:42.856917   14408 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:16:42.860840   14408 config.go:182] Loaded profile config "cert-expiration-153000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:16:42.860902   14408 config.go:182] Loaded profile config "multinode-886000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:16:42.860954   14408 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:16:42.865497   14408 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:16:42.870561   14408 start.go:297] selected driver: qemu2
	I0318 04:16:42.870568   14408 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:16:42.870577   14408 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:16:42.872766   14408 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:16:42.876517   14408 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:16:42.879592   14408 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:16:42.879628   14408 cni.go:84] Creating CNI manager for ""
	I0318 04:16:42.879635   14408 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:16:42.879639   14408 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:16:42.879669   14408 start.go:340] cluster config:
	{Name:no-preload-586000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-586000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:16:42.884009   14408 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:42.892535   14408 out.go:177] * Starting "no-preload-586000" primary control-plane node in "no-preload-586000" cluster
	I0318 04:16:42.896429   14408 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 04:16:42.896522   14408 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/no-preload-586000/config.json ...
	I0318 04:16:42.896538   14408 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/no-preload-586000/config.json: {Name:mk6f009d2a95932921f0a1b6ab7d63be83e10bb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:16:42.896578   14408 cache.go:107] acquiring lock: {Name:mke787a1c43e6857f491d2d911b14826b4716c98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:42.896595   14408 cache.go:107] acquiring lock: {Name:mk21af4683ba0708281fbf96fe95d362f640aff7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:42.896645   14408 cache.go:107] acquiring lock: {Name:mk35972bbfda8ed156634dbf66f2119d970bd39b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:42.896657   14408 cache.go:115] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0318 04:16:42.896665   14408 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 88.709µs
	I0318 04:16:42.896670   14408 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0318 04:16:42.896677   14408 cache.go:107] acquiring lock: {Name:mk2e1d483fe98e63ab456b49a6fa73c9acd5cc59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:42.896690   14408 cache.go:107] acquiring lock: {Name:mk7d78b3f4d42cda30cdb299d276167fcfd89074 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:42.896595   14408 cache.go:107] acquiring lock: {Name:mke77e5196b755cb9adcd09a0be92eead30803ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:42.896743   14408 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 04:16:42.896788   14408 start.go:360] acquireMachinesLock for no-preload-586000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:16:42.896813   14408 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 04:16:42.896821   14408 cache.go:107] acquiring lock: {Name:mkd1c251a22b697c0676d326e1af9df66c36d13d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:42.896852   14408 cache.go:107] acquiring lock: {Name:mk584d8217e85acdd2a23fce74c17242b43049e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:42.896855   14408 start.go:364] duration metric: took 54.542µs to acquireMachinesLock for "no-preload-586000"
	I0318 04:16:42.896900   14408 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 04:16:42.896973   14408 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 04:16:42.896991   14408 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 04:16:42.897005   14408 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 04:16:42.897039   14408 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 04:16:42.896948   14408 start.go:93] Provisioning new machine with config: &{Name:no-preload-586000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-586000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:16:42.897115   14408 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:16:42.902462   14408 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:16:42.906451   14408 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 04:16:42.907205   14408 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 04:16:42.907378   14408 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 04:16:42.912003   14408 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 04:16:42.912084   14408 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 04:16:42.912163   14408 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 04:16:42.912213   14408 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 04:16:42.919663   14408 start.go:159] libmachine.API.Create for "no-preload-586000" (driver="qemu2")
	I0318 04:16:42.919690   14408 client.go:168] LocalClient.Create starting
	I0318 04:16:42.919756   14408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:16:42.919784   14408 main.go:141] libmachine: Decoding PEM data...
	I0318 04:16:42.919792   14408 main.go:141] libmachine: Parsing certificate...
	I0318 04:16:42.919835   14408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:16:42.919857   14408 main.go:141] libmachine: Decoding PEM data...
	I0318 04:16:42.919866   14408 main.go:141] libmachine: Parsing certificate...
	I0318 04:16:42.920190   14408 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:16:43.065186   14408 main.go:141] libmachine: Creating SSH key...
	I0318 04:16:43.151932   14408 main.go:141] libmachine: Creating Disk image...
	I0318 04:16:43.151952   14408 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:16:43.152163   14408 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/disk.qcow2
	I0318 04:16:43.164722   14408 main.go:141] libmachine: STDOUT: 
	I0318 04:16:43.164742   14408 main.go:141] libmachine: STDERR: 
	I0318 04:16:43.164800   14408 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/disk.qcow2 +20000M
	I0318 04:16:43.177155   14408 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:16:43.177173   14408 main.go:141] libmachine: STDERR: 
	I0318 04:16:43.177189   14408 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/disk.qcow2
	I0318 04:16:43.177194   14408 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:16:43.177247   14408 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:05:78:52:53:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/disk.qcow2
	I0318 04:16:43.179270   14408 main.go:141] libmachine: STDOUT: 
	I0318 04:16:43.179289   14408 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:16:43.179309   14408 client.go:171] duration metric: took 259.622542ms to LocalClient.Create
	I0318 04:16:44.850849   14408 cache.go:162] opening:  /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 04:16:44.905754   14408 cache.go:162] opening:  /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 04:16:44.955242   14408 cache.go:162] opening:  /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 04:16:44.970994   14408 cache.go:162] opening:  /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0318 04:16:44.980559   14408 cache.go:162] opening:  /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0
	I0318 04:16:44.995428   14408 cache.go:162] opening:  /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 04:16:45.007632   14408 cache.go:162] opening:  /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 04:16:45.091492   14408 cache.go:157] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0318 04:16:45.091549   14408 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 2.194986667s
	I0318 04:16:45.091574   14408 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0318 04:16:45.179568   14408 start.go:128] duration metric: took 2.282512417s to createHost
	I0318 04:16:45.179611   14408 start.go:83] releasing machines lock for "no-preload-586000", held for 2.282790333s
	W0318 04:16:45.179661   14408 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:16:45.191299   14408 out.go:177] * Deleting "no-preload-586000" in qemu2 ...
	W0318 04:16:45.220724   14408 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:16:45.220755   14408 start.go:728] Will try again in 5 seconds ...
	I0318 04:16:46.870022   14408 cache.go:157] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0318 04:16:46.870094   14408 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.973539708s
	I0318 04:16:46.870125   14408 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0318 04:16:47.983606   14408 cache.go:157] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0318 04:16:47.983654   14408 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 5.087019375s
	I0318 04:16:47.983679   14408 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0318 04:16:47.999230   14408 cache.go:157] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0318 04:16:47.999274   14408 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 5.102750542s
	I0318 04:16:47.999297   14408 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0318 04:16:49.255811   14408 cache.go:157] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0318 04:16:49.255869   14408 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 6.359501125s
	I0318 04:16:49.255895   14408 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0318 04:16:49.255903   14408 cache.go:157] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0318 04:16:49.255944   14408 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 6.359590667s
	I0318 04:16:49.255960   14408 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0318 04:16:50.220765   14408 start.go:360] acquireMachinesLock for no-preload-586000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:16:50.221169   14408 start.go:364] duration metric: took 330µs to acquireMachinesLock for "no-preload-586000"
	I0318 04:16:50.221302   14408 start.go:93] Provisioning new machine with config: &{Name:no-preload-586000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-586000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:16:50.221650   14408 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:16:50.226382   14408 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:16:50.277689   14408 start.go:159] libmachine.API.Create for "no-preload-586000" (driver="qemu2")
	I0318 04:16:50.277744   14408 client.go:168] LocalClient.Create starting
	I0318 04:16:50.277861   14408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:16:50.277931   14408 main.go:141] libmachine: Decoding PEM data...
	I0318 04:16:50.277948   14408 main.go:141] libmachine: Parsing certificate...
	I0318 04:16:50.278033   14408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:16:50.278075   14408 main.go:141] libmachine: Decoding PEM data...
	I0318 04:16:50.278090   14408 main.go:141] libmachine: Parsing certificate...
	I0318 04:16:50.278594   14408 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:16:50.430323   14408 main.go:141] libmachine: Creating SSH key...
	I0318 04:16:50.478457   14408 main.go:141] libmachine: Creating Disk image...
	I0318 04:16:50.478462   14408 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:16:50.478625   14408 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/disk.qcow2
	I0318 04:16:50.491306   14408 main.go:141] libmachine: STDOUT: 
	I0318 04:16:50.491328   14408 main.go:141] libmachine: STDERR: 
	I0318 04:16:50.491379   14408 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/disk.qcow2 +20000M
	I0318 04:16:50.502362   14408 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:16:50.502387   14408 main.go:141] libmachine: STDERR: 
	I0318 04:16:50.502398   14408 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/disk.qcow2
	I0318 04:16:50.502403   14408 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:16:50.502444   14408 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:14:80:7a:d2:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/disk.qcow2
	I0318 04:16:50.504299   14408 main.go:141] libmachine: STDOUT: 
	I0318 04:16:50.504317   14408 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:16:50.504330   14408 client.go:171] duration metric: took 226.588541ms to LocalClient.Create
	I0318 04:16:52.504879   14408 start.go:128] duration metric: took 2.283273583s to createHost
	I0318 04:16:52.504937   14408 start.go:83] releasing machines lock for "no-preload-586000", held for 2.283819584s
	W0318 04:16:52.505309   14408 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-586000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-586000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:16:52.521073   14408 out.go:177] 
	W0318 04:16:52.526122   14408 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:16:52.526151   14408 out.go:239] * 
	* 
	W0318 04:16:52.528512   14408 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:16:52.541989   14408 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-586000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000: exit status 7 (65.36725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.85s)
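
Note on the root cause: every subtest in this group dies at the same step. libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so no VM is ever created; the qemu-img convert/resize steps just before it succeed. A minimal standalone probe for that one condition, as a sketch (illustrative, not part of the test suite; the only input it assumes is the SocketVMnetPath from the config dump above):

	// probe_socket_vmnet.go: dial the socket_vmnet control socket exactly
	// where the qemu2 driver expects it to be listening.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the profile config
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// On this agent the dial fails the same way the log does:
			// "connect: connection refused" (daemon not running) or
			// "no such file or directory" (socket never created).
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections at", sock)
	}

If the probe fails, restarting the socket_vmnet daemon on the build agent should clear this whole group, since everything after the socket dial is never reached.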

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-586000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-586000 create -f testdata/busybox.yaml: exit status 1 (29.160958ms)

** stderr ** 
	error: context "no-preload-586000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-586000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000: exit status 7 (31.8215ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-586000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000: exit status 7 (31.565292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
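
This failure is a cascade, not an independent bug: FirstStart never provisioned the cluster, so no context named "no-preload-586000" was written to the kubeconfig at /Users/jenkins/minikube-integration/18431-10628/kubeconfig, and every kubectl --context call fails before contacting any server. A sketch of the same context lookup with client-go's clientcmd (illustrative; this little program and its error handling are assumptions, though the API calls are standard client-go):

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Same resolution kubectl performs: load the kubeconfig (honoring
		// KUBECONFIG), then force the requested context, as --context does.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		overrides := &clientcmd.ConfigOverrides{CurrentContext: "no-preload-586000"}
		cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
		if _, err := cfg.ClientConfig(); err != nil {
			// With the context missing this yields an error like:
			//   context "no-preload-586000" does not exist
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("context resolved")
	}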

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-586000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-586000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-586000 describe deploy/metrics-server -n kube-system: exit status 1 (26.795792ms)

** stderr ** 
	error: context "no-preload-586000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-586000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000: exit status 7 (31.837375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-586000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-586000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.1958405s)

-- stdout --
	* [no-preload-586000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-586000" primary control-plane node in "no-preload-586000" cluster
	* Restarting existing qemu2 VM for "no-preload-586000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-586000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:16:54.852116   14479 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:16:54.852245   14479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:16:54.852248   14479 out.go:304] Setting ErrFile to fd 2...
	I0318 04:16:54.852250   14479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:16:54.852359   14479 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:16:54.853380   14479 out.go:298] Setting JSON to false
	I0318 04:16:54.869427   14479 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8187,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:16:54.869486   14479 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:16:54.874463   14479 out.go:177] * [no-preload-586000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:16:54.883417   14479 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:16:54.883476   14479 notify.go:220] Checking for updates...
	I0318 04:16:54.892372   14479 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:16:54.895412   14479 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:16:54.898345   14479 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:16:54.901395   14479 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:16:54.904390   14479 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:16:54.907673   14479 config.go:182] Loaded profile config "no-preload-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0318 04:16:54.907953   14479 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:16:54.912339   14479 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:16:54.919322   14479 start.go:297] selected driver: qemu2
	I0318 04:16:54.919328   14479 start.go:901] validating driver "qemu2" against &{Name:no-preload-586000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-586000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:16:54.919386   14479 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:16:54.921800   14479 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:16:54.921842   14479 cni.go:84] Creating CNI manager for ""
	I0318 04:16:54.921852   14479 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:16:54.921885   14479 start.go:340] cluster config:
	{Name:no-preload-586000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-586000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:16:54.926405   14479 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:54.933361   14479 out.go:177] * Starting "no-preload-586000" primary control-plane node in "no-preload-586000" cluster
	I0318 04:16:54.937368   14479 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 04:16:54.937477   14479 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/no-preload-586000/config.json ...
	I0318 04:16:54.937482   14479 cache.go:107] acquiring lock: {Name:mke787a1c43e6857f491d2d911b14826b4716c98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:54.937496   14479 cache.go:107] acquiring lock: {Name:mk7d78b3f4d42cda30cdb299d276167fcfd89074 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:54.937503   14479 cache.go:107] acquiring lock: {Name:mke77e5196b755cb9adcd09a0be92eead30803ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:54.937554   14479 cache.go:115] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0318 04:16:54.937566   14479 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 85.667µs
	I0318 04:16:54.937567   14479 cache.go:115] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0318 04:16:54.937574   14479 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0318 04:16:54.937582   14479 cache.go:115] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0318 04:16:54.937585   14479 cache.go:107] acquiring lock: {Name:mk584d8217e85acdd2a23fce74c17242b43049e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:54.937591   14479 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 101.208µs
	I0318 04:16:54.937596   14479 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0318 04:16:54.937605   14479 cache.go:107] acquiring lock: {Name:mk21af4683ba0708281fbf96fe95d362f640aff7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:54.937576   14479 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 94.083µs
	I0318 04:16:54.937647   14479 cache.go:115] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0318 04:16:54.937627   14479 cache.go:107] acquiring lock: {Name:mk2e1d483fe98e63ab456b49a6fa73c9acd5cc59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:54.937650   14479 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0318 04:16:54.937652   14479 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 48µs
	I0318 04:16:54.937678   14479 cache.go:107] acquiring lock: {Name:mkd1c251a22b697c0676d326e1af9df66c36d13d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:54.937698   14479 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0318 04:16:54.937632   14479 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 04:16:54.937721   14479 cache.go:115] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0318 04:16:54.937729   14479 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 119.625µs
	I0318 04:16:54.937734   14479 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0318 04:16:54.937692   14479 cache.go:107] acquiring lock: {Name:mk35972bbfda8ed156634dbf66f2119d970bd39b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:16:54.937752   14479 cache.go:115] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0318 04:16:54.937755   14479 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 102.75µs
	I0318 04:16:54.937759   14479 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0318 04:16:54.937799   14479 cache.go:115] /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0318 04:16:54.937808   14479 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 163.5µs
	I0318 04:16:54.937819   14479 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0318 04:16:54.937905   14479 start.go:360] acquireMachinesLock for no-preload-586000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:16:54.937944   14479 start.go:364] duration metric: took 33.25µs to acquireMachinesLock for "no-preload-586000"
	I0318 04:16:54.937954   14479 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:16:54.937961   14479 fix.go:54] fixHost starting: 
	I0318 04:16:54.938079   14479 fix.go:112] recreateIfNeeded on no-preload-586000: state=Stopped err=<nil>
	W0318 04:16:54.938089   14479 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:16:54.946398   14479 out.go:177] * Restarting existing qemu2 VM for "no-preload-586000" ...
	I0318 04:16:54.950384   14479 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:14:80:7a:d2:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/disk.qcow2
	I0318 04:16:54.950705   14479 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 04:16:54.952760   14479 main.go:141] libmachine: STDOUT: 
	I0318 04:16:54.952809   14479 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:16:54.952848   14479 fix.go:56] duration metric: took 14.8875ms for fixHost
	I0318 04:16:54.952853   14479 start.go:83] releasing machines lock for "no-preload-586000", held for 14.905542ms
	W0318 04:16:54.952861   14479 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:16:54.952894   14479 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:16:54.952899   14479 start.go:728] Will try again in 5 seconds ...
	I0318 04:16:56.845314   14479 cache.go:162] opening:  /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0
	I0318 04:16:59.952912   14479 start.go:360] acquireMachinesLock for no-preload-586000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:16:59.953305   14479 start.go:364] duration metric: took 290.083µs to acquireMachinesLock for "no-preload-586000"
	I0318 04:16:59.953466   14479 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:16:59.953496   14479 fix.go:54] fixHost starting: 
	I0318 04:16:59.954171   14479 fix.go:112] recreateIfNeeded on no-preload-586000: state=Stopped err=<nil>
	W0318 04:16:59.954199   14479 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:16:59.959685   14479 out.go:177] * Restarting existing qemu2 VM for "no-preload-586000" ...
	I0318 04:16:59.966794   14479 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:14:80:7a:d2:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/no-preload-586000/disk.qcow2
	I0318 04:16:59.978522   14479 main.go:141] libmachine: STDOUT: 
	I0318 04:16:59.978608   14479 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:16:59.978690   14479 fix.go:56] duration metric: took 25.195791ms for fixHost
	I0318 04:16:59.978714   14479 start.go:83] releasing machines lock for "no-preload-586000", held for 25.385167ms
	W0318 04:16:59.978887   14479 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-586000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-586000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:16:59.987550   14479 out.go:177] 
	W0318 04:16:59.991651   14479 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:16:59.991731   14479 out.go:239] * 
	* 
	W0318 04:16:59.995172   14479 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:17:00.007620   14479 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-586000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000: exit status 7 (66.48025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
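
SecondStart reaches the same dead end by the other code path: the saved profile exists, so minikube skips creation ("Skipping create...Using existing machine configuration"), and fixHost tries to restart the stopped VM, failing on the identical socket dial. The state it reuses is just the profile JSON saved earlier at .minikube/profiles/no-preload-586000/config.json. A small sketch that reads the fields driving that decision (hypothetical helper, not a minikube tool; the three field names are taken from the config dump above, everything else is an assumption):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// Subset of the saved cluster config that this sketch cares about.
	type profile struct {
		Name    string `json:"Name"`
		Driver  string `json:"Driver"`
		Network string `json:"Network"`
	}

	func main() {
		// Path pattern from the log above; set MINIKUBE_HOME accordingly.
		path := os.ExpandEnv("$MINIKUBE_HOME/profiles/no-preload-586000/config.json")
		b, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var p profile
		if err := json.Unmarshal(b, &p); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Expected here: driver=qemu2 network=socket_vmnet, which is why the
		// restart goes straight back through the refused socket.
		fmt.Printf("profile %q: driver=%s network=%s\n", p.Name, p.Driver, p.Network)
	}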

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-586000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000: exit status 7 (34.099416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-586000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-586000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-586000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.6785ms)

** stderr ** 
	error: context "no-preload-586000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-586000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000: exit status 7 (31.036666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-586000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
}
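
The block above is go-cmp-style "(-want +got)" output: every expected image sits on the -want side because `image list` returned nothing for the stopped profile. A minimal sketch of how such a diff is produced, assuming github.com/google/go-cmp is available; the two image names are taken from the list above, trimmed for brevity.

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// Two of the expected images from the log; the full list is above.
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/pause:3.9",
	}
	// `image list` yields nothing for a profile whose VM never started.
	got := []string{}
	// cmp.Diff(x, y) reports entries only in x with "-" and only in y
	// with "+", which is exactly the shape of the failure above.
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("images missing (-want +got):\n%s", diff)
	}
}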
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000: exit status 7 (31.525917ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-586000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-586000 --alsologtostderr -v=1: exit status 83 (43.991208ms)

-- stdout --
	* The control-plane node no-preload-586000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-586000"

-- /stdout --
** stderr ** 
	I0318 04:17:00.281631   14503 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:17:00.282021   14503 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:17:00.282025   14503 out.go:304] Setting ErrFile to fd 2...
	I0318 04:17:00.282027   14503 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:17:00.282218   14503 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:17:00.282494   14503 out.go:298] Setting JSON to false
	I0318 04:17:00.282507   14503 mustload.go:65] Loading cluster: no-preload-586000
	I0318 04:17:00.283004   14503 config.go:182] Loaded profile config "no-preload-586000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0318 04:17:00.287392   14503 out.go:177] * The control-plane node no-preload-586000 host is not running: state=Stopped
	I0318 04:17:00.291229   14503 out.go:177]   To start a cluster, run: "minikube start -p no-preload-586000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-586000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000: exit status 7 (31.30575ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-586000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000: exit status 7 (31.383666ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

TestStartStop/group/embed-certs/serial/FirstStart (9.96s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-684000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-684000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (9.887707s)

-- stdout --
	* [embed-certs-684000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-684000" primary control-plane node in "embed-certs-684000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-684000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:17:00.750304   14526 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:17:00.750442   14526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:17:00.750445   14526 out.go:304] Setting ErrFile to fd 2...
	I0318 04:17:00.750448   14526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:17:00.750574   14526 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:17:00.751657   14526 out.go:298] Setting JSON to false
	I0318 04:17:00.768015   14526 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8193,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:17:00.768084   14526 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:17:00.772383   14526 out.go:177] * [embed-certs-684000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:17:00.780484   14526 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:17:00.784520   14526 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:17:00.780518   14526 notify.go:220] Checking for updates...
	I0318 04:17:00.790426   14526 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:17:00.793472   14526 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:17:00.796545   14526 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:17:00.799440   14526 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:17:00.802886   14526 config.go:182] Loaded profile config "cert-expiration-153000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:17:00.802954   14526 config.go:182] Loaded profile config "multinode-886000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:17:00.803000   14526 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:17:00.807482   14526 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:17:00.814449   14526 start.go:297] selected driver: qemu2
	I0318 04:17:00.814455   14526 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:17:00.814461   14526 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:17:00.816721   14526 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:17:00.820444   14526 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:17:00.821941   14526 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:17:00.821978   14526 cni.go:84] Creating CNI manager for ""
	I0318 04:17:00.821991   14526 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:17:00.822002   14526 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:17:00.822024   14526 start.go:340] cluster config:
	{Name:embed-certs-684000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:17:00.826527   14526 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:17:00.833503   14526 out.go:177] * Starting "embed-certs-684000" primary control-plane node in "embed-certs-684000" cluster
	I0318 04:17:00.837481   14526 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:17:00.837498   14526 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:17:00.837514   14526 cache.go:56] Caching tarball of preloaded images
	I0318 04:17:00.837598   14526 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:17:00.837605   14526 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:17:00.837665   14526 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/embed-certs-684000/config.json ...
	I0318 04:17:00.837677   14526 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/embed-certs-684000/config.json: {Name:mkca51c88849155c402c271b27dd792a2eec25db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:17:00.837906   14526 start.go:360] acquireMachinesLock for embed-certs-684000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:17:00.837941   14526 start.go:364] duration metric: took 28.208µs to acquireMachinesLock for "embed-certs-684000"
	I0318 04:17:00.837956   14526 start.go:93] Provisioning new machine with config: &{Name:embed-certs-684000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:17:00.837989   14526 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:17:00.846440   14526 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:17:00.864290   14526 start.go:159] libmachine.API.Create for "embed-certs-684000" (driver="qemu2")
	I0318 04:17:00.864322   14526 client.go:168] LocalClient.Create starting
	I0318 04:17:00.864392   14526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:17:00.864429   14526 main.go:141] libmachine: Decoding PEM data...
	I0318 04:17:00.864438   14526 main.go:141] libmachine: Parsing certificate...
	I0318 04:17:00.864487   14526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:17:00.864509   14526 main.go:141] libmachine: Decoding PEM data...
	I0318 04:17:00.864517   14526 main.go:141] libmachine: Parsing certificate...
	I0318 04:17:00.864877   14526 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:17:01.007672   14526 main.go:141] libmachine: Creating SSH key...
	I0318 04:17:01.084638   14526 main.go:141] libmachine: Creating Disk image...
	I0318 04:17:01.084643   14526 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:17:01.084821   14526 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/disk.qcow2
	I0318 04:17:01.097278   14526 main.go:141] libmachine: STDOUT: 
	I0318 04:17:01.097295   14526 main.go:141] libmachine: STDERR: 
	I0318 04:17:01.097355   14526 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/disk.qcow2 +20000M
	I0318 04:17:01.108656   14526 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:17:01.108674   14526 main.go:141] libmachine: STDERR: 
	I0318 04:17:01.108687   14526 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/disk.qcow2
	I0318 04:17:01.108691   14526 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:17:01.108722   14526 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:10:64:64:f2:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/disk.qcow2
	I0318 04:17:01.110542   14526 main.go:141] libmachine: STDOUT: 
	I0318 04:17:01.110556   14526 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:17:01.110573   14526 client.go:171] duration metric: took 246.254667ms to LocalClient.Create
	I0318 04:17:03.112744   14526 start.go:128] duration metric: took 2.274806417s to createHost
	I0318 04:17:03.112827   14526 start.go:83] releasing machines lock for "embed-certs-684000", held for 2.274942292s
	W0318 04:17:03.112937   14526 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:17:03.122945   14526 out.go:177] * Deleting "embed-certs-684000" in qemu2 ...
	W0318 04:17:03.151749   14526 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:17:03.151782   14526 start.go:728] Will try again in 5 seconds ...
	I0318 04:17:08.153784   14526 start.go:360] acquireMachinesLock for embed-certs-684000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:17:08.154332   14526 start.go:364] duration metric: took 445.334µs to acquireMachinesLock for "embed-certs-684000"
	I0318 04:17:08.154524   14526 start.go:93] Provisioning new machine with config: &{Name:embed-certs-684000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:17:08.154790   14526 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:17:08.164503   14526 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:17:08.211785   14526 start.go:159] libmachine.API.Create for "embed-certs-684000" (driver="qemu2")
	I0318 04:17:08.211864   14526 client.go:168] LocalClient.Create starting
	I0318 04:17:08.212011   14526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:17:08.212076   14526 main.go:141] libmachine: Decoding PEM data...
	I0318 04:17:08.212095   14526 main.go:141] libmachine: Parsing certificate...
	I0318 04:17:08.212170   14526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:17:08.212212   14526 main.go:141] libmachine: Decoding PEM data...
	I0318 04:17:08.212226   14526 main.go:141] libmachine: Parsing certificate...
	I0318 04:17:08.212724   14526 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:17:08.363643   14526 main.go:141] libmachine: Creating SSH key...
	I0318 04:17:08.532884   14526 main.go:141] libmachine: Creating Disk image...
	I0318 04:17:08.532891   14526 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:17:08.533095   14526 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/disk.qcow2
	I0318 04:17:08.545900   14526 main.go:141] libmachine: STDOUT: 
	I0318 04:17:08.545918   14526 main.go:141] libmachine: STDERR: 
	I0318 04:17:08.545976   14526 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/disk.qcow2 +20000M
	I0318 04:17:08.556796   14526 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:17:08.556811   14526 main.go:141] libmachine: STDERR: 
	I0318 04:17:08.556822   14526 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/disk.qcow2
	I0318 04:17:08.556827   14526 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:17:08.556856   14526 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:92:6f:ba:68:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/disk.qcow2
	I0318 04:17:08.558591   14526 main.go:141] libmachine: STDOUT: 
	I0318 04:17:08.558609   14526 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:17:08.558620   14526 client.go:171] duration metric: took 346.751333ms to LocalClient.Create
	I0318 04:17:10.560734   14526 start.go:128] duration metric: took 2.405989916s to createHost
	I0318 04:17:10.560797   14526 start.go:83] releasing machines lock for "embed-certs-684000", held for 2.406515417s
	W0318 04:17:10.561171   14526 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-684000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-684000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:17:10.575818   14526 out.go:177] 
	W0318 04:17:10.579855   14526 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:17:10.579884   14526 out.go:239] * 
	* 
	W0318 04:17:10.582229   14526 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:17:10.592732   14526 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-684000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000: exit status 7 (68.34325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.96s)
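
Every start attempt in this group dies at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so qemu is never launched with the network backend it expects on fd 3 (-netdev socket,id=net0,fd=3) and provisioning exits with GUEST_PROVISION. A quick way to confirm the daemon is down is to dial the socket directly; this sketch reproduces the exact failure, with the socket path taken from the log above. (On a Homebrew install, restarting the service, e.g. via `sudo brew services start socket_vmnet` as minikube's qemu driver docs suggest, is the usual fix, though this log alone cannot confirm that.)

package main

import (
	"fmt"
	"net"
)

func main() {
	// socket_vmnet_client begins by connecting to this unix socket and
	// then hands the resulting descriptor to qemu as fd 3.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// With no daemon listening this prints:
		// dial failed: dial unix /var/run/socket_vmnet: connect: connection refused
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet daemon is accepting connections")
}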

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-684000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-684000 create -f testdata/busybox.yaml: exit status 1 (29.00325ms)

** stderr ** 
	error: context "embed-certs-684000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-684000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000: exit status 7 (31.460542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-684000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000: exit status 7 (30.946375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-684000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-684000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-684000 describe deploy/metrics-server -n kube-system: exit status 1 (26.70275ms)

** stderr ** 
	error: context "embed-certs-684000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-684000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000: exit status 7 (30.920875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-684000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-684000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.203521417s)

-- stdout --
	* [embed-certs-684000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-684000" primary control-plane node in "embed-certs-684000" cluster
	* Restarting existing qemu2 VM for "embed-certs-684000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-684000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:17:13.974584   14574 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:17:13.974719   14574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:17:13.974722   14574 out.go:304] Setting ErrFile to fd 2...
	I0318 04:17:13.974725   14574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:17:13.974869   14574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:17:13.975844   14574 out.go:298] Setting JSON to false
	I0318 04:17:13.992023   14574 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8206,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:17:13.992095   14574 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:17:13.996453   14574 out.go:177] * [embed-certs-684000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:17:14.004563   14574 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:17:14.004602   14574 notify.go:220] Checking for updates...
	I0318 04:17:14.012456   14574 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:17:14.013990   14574 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:17:14.017433   14574 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:17:14.024467   14574 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:17:14.031463   14574 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:17:14.034869   14574 config.go:182] Loaded profile config "embed-certs-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:17:14.035143   14574 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:17:14.039409   14574 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:17:14.047469   14574 start.go:297] selected driver: qemu2
	I0318 04:17:14.047477   14574 start.go:901] validating driver "qemu2" against &{Name:embed-certs-684000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:17:14.047531   14574 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:17:14.049977   14574 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:17:14.050033   14574 cni.go:84] Creating CNI manager for ""
	I0318 04:17:14.050041   14574 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:17:14.050072   14574 start.go:340] cluster config:
	{Name:embed-certs-684000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:17:14.054585   14574 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:17:14.058537   14574 out.go:177] * Starting "embed-certs-684000" primary control-plane node in "embed-certs-684000" cluster
	I0318 04:17:14.066473   14574 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:17:14.066494   14574 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:17:14.066505   14574 cache.go:56] Caching tarball of preloaded images
	I0318 04:17:14.066558   14574 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:17:14.066563   14574 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:17:14.066627   14574 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/embed-certs-684000/config.json ...
	I0318 04:17:14.067149   14574 start.go:360] acquireMachinesLock for embed-certs-684000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:17:14.067175   14574 start.go:364] duration metric: took 20.334µs to acquireMachinesLock for "embed-certs-684000"
	I0318 04:17:14.067184   14574 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:17:14.067190   14574 fix.go:54] fixHost starting: 
	I0318 04:17:14.067301   14574 fix.go:112] recreateIfNeeded on embed-certs-684000: state=Stopped err=<nil>
	W0318 04:17:14.067309   14574 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:17:14.075439   14574 out.go:177] * Restarting existing qemu2 VM for "embed-certs-684000" ...
	I0318 04:17:14.083617   14574 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:92:6f:ba:68:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/disk.qcow2
	I0318 04:17:14.085805   14574 main.go:141] libmachine: STDOUT: 
	I0318 04:17:14.085837   14574 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:17:14.085872   14574 fix.go:56] duration metric: took 18.681917ms for fixHost
	I0318 04:17:14.085878   14574 start.go:83] releasing machines lock for "embed-certs-684000", held for 18.699459ms
	W0318 04:17:14.085887   14574 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:17:14.085942   14574 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:17:14.085948   14574 start.go:728] Will try again in 5 seconds ...
	I0318 04:17:19.087932   14574 start.go:360] acquireMachinesLock for embed-certs-684000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:17:19.088332   14574 start.go:364] duration metric: took 265.125µs to acquireMachinesLock for "embed-certs-684000"
	I0318 04:17:19.088466   14574 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:17:19.088487   14574 fix.go:54] fixHost starting: 
	I0318 04:17:19.089295   14574 fix.go:112] recreateIfNeeded on embed-certs-684000: state=Stopped err=<nil>
	W0318 04:17:19.089329   14574 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:17:19.094664   14574 out.go:177] * Restarting existing qemu2 VM for "embed-certs-684000" ...
	I0318 04:17:19.101893   14574 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:92:6f:ba:68:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/embed-certs-684000/disk.qcow2
	I0318 04:17:19.111253   14574 main.go:141] libmachine: STDOUT: 
	I0318 04:17:19.111314   14574 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:17:19.111383   14574 fix.go:56] duration metric: took 22.897709ms for fixHost
	I0318 04:17:19.111400   14574 start.go:83] releasing machines lock for "embed-certs-684000", held for 22.99625ms
	W0318 04:17:19.111572   14574 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-684000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-684000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:17:19.118637   14574 out.go:177] 
	W0318 04:17:19.122727   14574 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:17:19.122753   14574 out.go:239] * 
	* 
	W0318 04:17:19.125354   14574 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:17:19.132684   14574 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-684000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000: exit status 7 (68.487917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.27s)
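Every failure in this serial group shares one root cause visible above: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the /var/run/socket_vmnet unix socket refuses connections, so no guest ever boots. A minimal Go sketch of that reachability probe follows; it is illustrative only and not part of the test suite, and only the socket path is taken from the logs:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client needs. With the
		// daemon down, this fails with the same "Connection refused"
		// reported by the driver above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}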

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-684000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000: exit status 7 (33.224459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
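The context error above (and in the next test) follows directly from the failed SecondStart: the cluster never came up, so no kubeconfig context named "embed-certs-684000" was ever written. A sketch of that precondition check, assuming the k8s.io/client-go module is available:

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig named by KUBECONFIG and look up the
		// profile's context; after the failed start it is simply absent.
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Println("cannot load kubeconfig:", err)
			return
		}
		if _, ok := cfg.Contexts["embed-certs-684000"]; !ok {
			fmt.Println(`context "embed-certs-684000" does not exist`)
		}
	}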

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-684000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-684000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-684000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.767208ms)

** stderr ** 
	error: context "embed-certs-684000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-684000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000: exit status 7 (30.939334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-684000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000: exit status 7 (31.00575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
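The (-want +got) diff above is go-cmp output: every expected image carries a leading "-" because "image list" against a profile whose VM never started returns nothing. A self-contained sketch reproducing that diff shape with github.com/google/go-cmp (the want entries are copied from the test; the empty got is the assumption):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/kube-apiserver:v1.28.4",
			"registry.k8s.io/pause:3.9",
		}
		var got []string // a stopped profile reports no images
		// Prints a (-want +got) diff like the failure above.
		fmt.Println(cmp.Diff(want, got))
	}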

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-684000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-684000 --alsologtostderr -v=1: exit status 83 (41.389208ms)

-- stdout --
	* The control-plane node embed-certs-684000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-684000"

-- /stdout --
** stderr ** 
	I0318 04:17:19.410298   14600 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:17:19.410455   14600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:17:19.410458   14600 out.go:304] Setting ErrFile to fd 2...
	I0318 04:17:19.410461   14600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:17:19.410616   14600 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:17:19.410820   14600 out.go:298] Setting JSON to false
	I0318 04:17:19.410829   14600 mustload.go:65] Loading cluster: embed-certs-684000
	I0318 04:17:19.411002   14600 config.go:182] Loaded profile config "embed-certs-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:17:19.414816   14600 out.go:177] * The control-plane node embed-certs-684000 host is not running: state=Stopped
	I0318 04:17:19.418665   14600 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-684000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-684000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000: exit status 7 (30.505292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-684000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000: exit status 7 (30.777833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-797000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-797000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (9.821421334s)

-- stdout --
	* [default-k8s-diff-port-797000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-797000" primary control-plane node in "default-k8s-diff-port-797000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-797000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:17:20.107484   14635 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:17:20.107596   14635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:17:20.107600   14635 out.go:304] Setting ErrFile to fd 2...
	I0318 04:17:20.107603   14635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:17:20.107745   14635 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:17:20.108844   14635 out.go:298] Setting JSON to false
	I0318 04:17:20.125013   14635 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8213,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:17:20.125071   14635 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:17:20.128924   14635 out.go:177] * [default-k8s-diff-port-797000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:17:20.134787   14635 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:17:20.138744   14635 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:17:20.134845   14635 notify.go:220] Checking for updates...
	I0318 04:17:20.145726   14635 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:17:20.148733   14635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:17:20.151704   14635 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:17:20.154749   14635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:17:20.158105   14635 config.go:182] Loaded profile config "cert-expiration-153000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:17:20.158176   14635 config.go:182] Loaded profile config "multinode-886000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:17:20.158224   14635 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:17:20.162672   14635 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:17:20.169748   14635 start.go:297] selected driver: qemu2
	I0318 04:17:20.169755   14635 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:17:20.169761   14635 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:17:20.172142   14635 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:17:20.175650   14635 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:17:20.178817   14635 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:17:20.178852   14635 cni.go:84] Creating CNI manager for ""
	I0318 04:17:20.178859   14635 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:17:20.178864   14635 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:17:20.178893   14635 start.go:340] cluster config:
	{Name:default-k8s-diff-port-797000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:17:20.183536   14635 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:17:20.188727   14635 out.go:177] * Starting "default-k8s-diff-port-797000" primary control-plane node in "default-k8s-diff-port-797000" cluster
	I0318 04:17:20.192707   14635 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:17:20.192740   14635 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:17:20.192758   14635 cache.go:56] Caching tarball of preloaded images
	I0318 04:17:20.192814   14635 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:17:20.192826   14635 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:17:20.192892   14635 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/default-k8s-diff-port-797000/config.json ...
	I0318 04:17:20.192911   14635 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/default-k8s-diff-port-797000/config.json: {Name:mk2f328155de566f71bf57db6e9999d4b4211eb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:17:20.193133   14635 start.go:360] acquireMachinesLock for default-k8s-diff-port-797000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:17:20.193170   14635 start.go:364] duration metric: took 28.25µs to acquireMachinesLock for "default-k8s-diff-port-797000"
	I0318 04:17:20.193184   14635 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:17:20.193213   14635 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:17:20.201729   14635 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:17:20.220078   14635 start.go:159] libmachine.API.Create for "default-k8s-diff-port-797000" (driver="qemu2")
	I0318 04:17:20.220108   14635 client.go:168] LocalClient.Create starting
	I0318 04:17:20.220180   14635 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:17:20.220209   14635 main.go:141] libmachine: Decoding PEM data...
	I0318 04:17:20.220221   14635 main.go:141] libmachine: Parsing certificate...
	I0318 04:17:20.220266   14635 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:17:20.220289   14635 main.go:141] libmachine: Decoding PEM data...
	I0318 04:17:20.220295   14635 main.go:141] libmachine: Parsing certificate...
	I0318 04:17:20.220691   14635 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:17:20.365821   14635 main.go:141] libmachine: Creating SSH key...
	I0318 04:17:20.465089   14635 main.go:141] libmachine: Creating Disk image...
	I0318 04:17:20.465097   14635 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:17:20.465272   14635 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/disk.qcow2
	I0318 04:17:20.477571   14635 main.go:141] libmachine: STDOUT: 
	I0318 04:17:20.477590   14635 main.go:141] libmachine: STDERR: 
	I0318 04:17:20.477651   14635 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/disk.qcow2 +20000M
	I0318 04:17:20.489248   14635 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:17:20.489271   14635 main.go:141] libmachine: STDERR: 
	I0318 04:17:20.489282   14635 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/disk.qcow2
	I0318 04:17:20.489285   14635 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:17:20.489312   14635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:ae:27:6d:a5:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/disk.qcow2
	I0318 04:17:20.491163   14635 main.go:141] libmachine: STDOUT: 
	I0318 04:17:20.491180   14635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:17:20.491207   14635 client.go:171] duration metric: took 271.104792ms to LocalClient.Create
	I0318 04:17:22.493458   14635 start.go:128] duration metric: took 2.300251084s to createHost
	I0318 04:17:22.493589   14635 start.go:83] releasing machines lock for "default-k8s-diff-port-797000", held for 2.30047475s
	W0318 04:17:22.493703   14635 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:17:22.509900   14635 out.go:177] * Deleting "default-k8s-diff-port-797000" in qemu2 ...
	W0318 04:17:22.534025   14635 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:17:22.534054   14635 start.go:728] Will try again in 5 seconds ...
	I0318 04:17:27.536055   14635 start.go:360] acquireMachinesLock for default-k8s-diff-port-797000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:17:27.536465   14635 start.go:364] duration metric: took 333.291µs to acquireMachinesLock for "default-k8s-diff-port-797000"
	I0318 04:17:27.536608   14635 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:17:27.536922   14635 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:17:27.546653   14635 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:17:27.595833   14635 start.go:159] libmachine.API.Create for "default-k8s-diff-port-797000" (driver="qemu2")
	I0318 04:17:27.595873   14635 client.go:168] LocalClient.Create starting
	I0318 04:17:27.596017   14635 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:17:27.596089   14635 main.go:141] libmachine: Decoding PEM data...
	I0318 04:17:27.596110   14635 main.go:141] libmachine: Parsing certificate...
	I0318 04:17:27.596170   14635 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:17:27.596212   14635 main.go:141] libmachine: Decoding PEM data...
	I0318 04:17:27.596225   14635 main.go:141] libmachine: Parsing certificate...
	I0318 04:17:27.597126   14635 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:17:27.753027   14635 main.go:141] libmachine: Creating SSH key...
	I0318 04:17:27.828662   14635 main.go:141] libmachine: Creating Disk image...
	I0318 04:17:27.828668   14635 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:17:27.828868   14635 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/disk.qcow2
	I0318 04:17:27.841500   14635 main.go:141] libmachine: STDOUT: 
	I0318 04:17:27.841529   14635 main.go:141] libmachine: STDERR: 
	I0318 04:17:27.841582   14635 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/disk.qcow2 +20000M
	I0318 04:17:27.852320   14635 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:17:27.852348   14635 main.go:141] libmachine: STDERR: 
	I0318 04:17:27.852362   14635 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/disk.qcow2
	I0318 04:17:27.852369   14635 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:17:27.852407   14635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:c3:5f:0d:c2:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/disk.qcow2
	I0318 04:17:27.854143   14635 main.go:141] libmachine: STDOUT: 
	I0318 04:17:27.854171   14635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:17:27.854184   14635 client.go:171] duration metric: took 258.307041ms to LocalClient.Create
	I0318 04:17:29.856287   14635 start.go:128] duration metric: took 2.319408541s to createHost
	I0318 04:17:29.856411   14635 start.go:83] releasing machines lock for "default-k8s-diff-port-797000", held for 2.3199945s
	W0318 04:17:29.856769   14635 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:17:29.870387   14635 out.go:177] 
	W0318 04:17:29.873551   14635 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:17:29.873601   14635 out.go:239] * 
	* 
	W0318 04:17:29.876108   14635 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:17:29.884418   14635 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-797000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000: exit status 7 (68.364583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.89s)
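The createHost sequence logged above is mechanical: download the boot2docker ISO, create an SSH key, convert a raw disk image to qcow2, grow it by 20000 MB, and hand the result to QEMU; only the final, socket_vmnet-backed launch fails. A minimal Go sketch of the two qemu-img steps, with shortened, illustrative paths in place of the real .minikube/machines/... ones:

	package main

	import (
		"log"
		"os/exec"
	)

	func run(name string, args ...string) {
		// Run a command and surface its combined output on failure,
		// roughly as the libmachine log lines above do.
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%s %v: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", "disk.qcow2.raw", "disk.qcow2")
		run("qemu-img", "resize", "disk.qcow2", "+20000M")
	}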

TestStartStop/group/newest-cni/serial/FirstStart (9.84s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-078000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-078000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (9.770075625s)

-- stdout --
	* [newest-cni-078000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-078000" primary control-plane node in "newest-cni-078000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-078000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:17:23.544071   14660 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:17:23.544208   14660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:17:23.544212   14660 out.go:304] Setting ErrFile to fd 2...
	I0318 04:17:23.544214   14660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:17:23.544349   14660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:17:23.545421   14660 out.go:298] Setting JSON to false
	I0318 04:17:23.561597   14660 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8216,"bootTime":1710752427,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:17:23.561663   14660 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:17:23.568018   14660 out.go:177] * [newest-cni-078000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:17:23.574943   14660 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:17:23.579990   14660 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:17:23.574991   14660 notify.go:220] Checking for updates...
	I0318 04:17:23.585934   14660 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:17:23.588947   14660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:17:23.591976   14660 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:17:23.594921   14660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:17:23.598283   14660 config.go:182] Loaded profile config "default-k8s-diff-port-797000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:17:23.598350   14660 config.go:182] Loaded profile config "multinode-886000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:17:23.598401   14660 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:17:23.602876   14660 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:17:23.609947   14660 start.go:297] selected driver: qemu2
	I0318 04:17:23.609954   14660 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:17:23.609961   14660 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:17:23.612226   14660 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0318 04:17:23.612248   14660 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0318 04:17:23.616851   14660 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:17:23.624012   14660 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0318 04:17:23.624056   14660 cni.go:84] Creating CNI manager for ""
	I0318 04:17:23.624065   14660 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:17:23.624073   14660 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:17:23.624107   14660 start.go:340] cluster config:
	{Name:newest-cni-078000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-078000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:17:23.629293   14660 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:17:23.636887   14660 out.go:177] * Starting "newest-cni-078000" primary control-plane node in "newest-cni-078000" cluster
	I0318 04:17:23.640751   14660 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 04:17:23.640776   14660 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0318 04:17:23.640795   14660 cache.go:56] Caching tarball of preloaded images
	I0318 04:17:23.640866   14660 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:17:23.640873   14660 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0318 04:17:23.640944   14660 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/newest-cni-078000/config.json ...
	I0318 04:17:23.640957   14660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/newest-cni-078000/config.json: {Name:mk18c1eb217b4b6e04b9f26e0dc7aa80cb98b4b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:17:23.641209   14660 start.go:360] acquireMachinesLock for newest-cni-078000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:17:23.641250   14660 start.go:364] duration metric: took 34.75µs to acquireMachinesLock for "newest-cni-078000"
	I0318 04:17:23.641267   14660 start.go:93] Provisioning new machine with config: &{Name:newest-cni-078000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-078000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:17:23.641302   14660 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:17:23.648786   14660 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:17:23.667231   14660 start.go:159] libmachine.API.Create for "newest-cni-078000" (driver="qemu2")
	I0318 04:17:23.667260   14660 client.go:168] LocalClient.Create starting
	I0318 04:17:23.667321   14660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:17:23.667352   14660 main.go:141] libmachine: Decoding PEM data...
	I0318 04:17:23.667362   14660 main.go:141] libmachine: Parsing certificate...
	I0318 04:17:23.667411   14660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:17:23.667435   14660 main.go:141] libmachine: Decoding PEM data...
	I0318 04:17:23.667440   14660 main.go:141] libmachine: Parsing certificate...
	I0318 04:17:23.667827   14660 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:17:23.808410   14660 main.go:141] libmachine: Creating SSH key...
	I0318 04:17:23.864463   14660 main.go:141] libmachine: Creating Disk image...
	I0318 04:17:23.864468   14660 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:17:23.864636   14660 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/disk.qcow2
	I0318 04:17:23.876775   14660 main.go:141] libmachine: STDOUT: 
	I0318 04:17:23.876797   14660 main.go:141] libmachine: STDERR: 
	I0318 04:17:23.876843   14660 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/disk.qcow2 +20000M
	I0318 04:17:23.887338   14660 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:17:23.887353   14660 main.go:141] libmachine: STDERR: 
	I0318 04:17:23.887362   14660 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/disk.qcow2
	I0318 04:17:23.887367   14660 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:17:23.887398   14660 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:38:55:0a:1c:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/disk.qcow2
	I0318 04:17:23.889095   14660 main.go:141] libmachine: STDOUT: 
	I0318 04:17:23.889112   14660 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:17:23.889129   14660 client.go:171] duration metric: took 221.872541ms to LocalClient.Create
	I0318 04:17:25.891262   14660 start.go:128] duration metric: took 2.250008417s to createHost
	I0318 04:17:25.891346   14660 start.go:83] releasing machines lock for "newest-cni-078000", held for 2.250160666s
	W0318 04:17:25.891407   14660 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:17:25.904659   14660 out.go:177] * Deleting "newest-cni-078000" in qemu2 ...
	W0318 04:17:25.933253   14660 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:17:25.933287   14660 start.go:728] Will try again in 5 seconds ...
	I0318 04:17:30.935366   14660 start.go:360] acquireMachinesLock for newest-cni-078000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:17:30.935751   14660 start.go:364] duration metric: took 256.209µs to acquireMachinesLock for "newest-cni-078000"
	I0318 04:17:30.935891   14660 start.go:93] Provisioning new machine with config: &{Name:newest-cni-078000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-078000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:17:30.936274   14660 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:17:30.945914   14660 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:17:30.994248   14660 start.go:159] libmachine.API.Create for "newest-cni-078000" (driver="qemu2")
	I0318 04:17:30.994298   14660 client.go:168] LocalClient.Create starting
	I0318 04:17:30.994395   14660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/ca.pem
	I0318 04:17:30.994445   14660 main.go:141] libmachine: Decoding PEM data...
	I0318 04:17:30.994464   14660 main.go:141] libmachine: Parsing certificate...
	I0318 04:17:30.994520   14660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18431-10628/.minikube/certs/cert.pem
	I0318 04:17:30.994546   14660 main.go:141] libmachine: Decoding PEM data...
	I0318 04:17:30.994565   14660 main.go:141] libmachine: Parsing certificate...
	I0318 04:17:30.995251   14660 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:17:31.147313   14660 main.go:141] libmachine: Creating SSH key...
	I0318 04:17:31.211922   14660 main.go:141] libmachine: Creating Disk image...
	I0318 04:17:31.211928   14660 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:17:31.212104   14660 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/disk.qcow2.raw /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/disk.qcow2
	I0318 04:17:31.224172   14660 main.go:141] libmachine: STDOUT: 
	I0318 04:17:31.224201   14660 main.go:141] libmachine: STDERR: 
	I0318 04:17:31.224253   14660 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/disk.qcow2 +20000M
	I0318 04:17:31.234748   14660 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:17:31.234766   14660 main.go:141] libmachine: STDERR: 
	I0318 04:17:31.234775   14660 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/disk.qcow2
	I0318 04:17:31.234780   14660 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:17:31.234826   14660 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:bd:ff:76:25:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/disk.qcow2
	I0318 04:17:31.236455   14660 main.go:141] libmachine: STDOUT: 
	I0318 04:17:31.236477   14660 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:17:31.236490   14660 client.go:171] duration metric: took 242.193541ms to LocalClient.Create
	I0318 04:17:33.238612   14660 start.go:128] duration metric: took 2.302352292s to createHost
	I0318 04:17:33.238694   14660 start.go:83] releasing machines lock for "newest-cni-078000", held for 2.302995s
	W0318 04:17:33.239026   14660 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-078000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-078000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:17:33.251930   14660 out.go:177] 
	W0318 04:17:33.255853   14660 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:17:33.255883   14660 out.go:239] * 
	* 
	W0318 04:17:33.259522   14660 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:17:33.269751   14660 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-078000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-078000 -n newest-cni-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-078000 -n newest-cni-078000: exit status 7 (67.244959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.84s)

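Every failure in this group shares one host-side root cause, visible in the stderr above: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot obtain the network file descriptor it hands to qemu-system-aarch64 (the fd=3 in "-netdev socket,id=net0,fd=3"). A minimal probe for that condition, sketched in Go with the standard library only (the socket path is taken from the log; the timeout value is an arbitrary choice):

// probe_socket_vmnet.go: check whether anything is accepting connections on
// the socket_vmnet path reported in the log above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// DialTimeout returns "connection refused" when the socket file exists
	// but no daemon is listening, matching the error in this run.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails on the Jenkins host, restarting whatever supervises the socket_vmnet daemon there (launchd, on a typical install) should clear this entire family of failures at once.
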
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-797000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-797000 create -f testdata/busybox.yaml: exit status 1 (29.476084ms)

** stderr ** 
	error: context "default-k8s-diff-port-797000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-797000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000: exit status 7 (31.552417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-797000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000: exit status 7 (31.219291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

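The DeployApp failure above is a downstream symptom rather than a new bug: because the cluster's first start exited before a kubeconfig entry was ever written, every "kubectl --context default-k8s-diff-port-797000" call in this group fails client-side with "context ... does not exist". A sketch of the missing precondition check, shelling out to kubectl (assumed to be on PATH; the context name comes from the log):

// has_context.go: report whether the kubeconfig knows a context at all,
// which is the condition these subtests fail before reaching any server.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hasContext(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasContext("default-k8s-diff-port-797000")
	fmt.Println(ok, err) // false, <nil> on this host: the cluster was never created
}
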
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-797000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-797000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-797000 describe deploy/metrics-server -n kube-system: exit status 1 (26.933792ms)

** stderr ** 
	error: context "default-k8s-diff-port-797000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-797000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000: exit status 7 (31.143083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-797000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-797000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.18584825s)

-- stdout --
	* [default-k8s-diff-port-797000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-797000" primary control-plane node in "default-k8s-diff-port-797000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-797000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-797000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0318 04:17:34.009256   14722 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:17:34.009384   14722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:17:34.009387   14722 out.go:304] Setting ErrFile to fd 2...
	I0318 04:17:34.009389   14722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:17:34.009510   14722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:17:34.010486   14722 out.go:298] Setting JSON to false
	I0318 04:17:34.026511   14722 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8227,"bootTime":1710752427,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:17:34.026573   14722 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:17:34.031685   14722 out.go:177] * [default-k8s-diff-port-797000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:17:34.038586   14722 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:17:34.042697   14722 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:17:34.038638   14722 notify.go:220] Checking for updates...
	I0318 04:17:34.046722   14722 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:17:34.049606   14722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:17:34.052653   14722 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:17:34.055728   14722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:17:34.057519   14722 config.go:182] Loaded profile config "default-k8s-diff-port-797000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:17:34.057774   14722 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:17:34.061713   14722 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:17:34.068567   14722 start.go:297] selected driver: qemu2
	I0318 04:17:34.068573   14722 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:17:34.068633   14722 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:17:34.070932   14722 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:17:34.070983   14722 cni.go:84] Creating CNI manager for ""
	I0318 04:17:34.070990   14722 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:17:34.071015   14722 start.go:340] cluster config:
	{Name:default-k8s-diff-port-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:17:34.075401   14722 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:17:34.083714   14722 out.go:177] * Starting "default-k8s-diff-port-797000" primary control-plane node in "default-k8s-diff-port-797000" cluster
	I0318 04:17:34.087702   14722 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:17:34.087718   14722 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:17:34.087734   14722 cache.go:56] Caching tarball of preloaded images
	I0318 04:17:34.087784   14722 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:17:34.087790   14722 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:17:34.087867   14722 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/default-k8s-diff-port-797000/config.json ...
	I0318 04:17:34.088343   14722 start.go:360] acquireMachinesLock for default-k8s-diff-port-797000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:17:34.088370   14722 start.go:364] duration metric: took 20.375µs to acquireMachinesLock for "default-k8s-diff-port-797000"
	I0318 04:17:34.088379   14722 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:17:34.088384   14722 fix.go:54] fixHost starting: 
	I0318 04:17:34.088509   14722 fix.go:112] recreateIfNeeded on default-k8s-diff-port-797000: state=Stopped err=<nil>
	W0318 04:17:34.088518   14722 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:17:34.092557   14722 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-797000" ...
	I0318 04:17:34.100710   14722 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:c3:5f:0d:c2:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/disk.qcow2
	I0318 04:17:34.102631   14722 main.go:141] libmachine: STDOUT: 
	I0318 04:17:34.102656   14722 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:17:34.102683   14722 fix.go:56] duration metric: took 14.300625ms for fixHost
	I0318 04:17:34.102687   14722 start.go:83] releasing machines lock for "default-k8s-diff-port-797000", held for 14.313833ms
	W0318 04:17:34.102694   14722 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:17:34.102722   14722 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:17:34.102727   14722 start.go:728] Will try again in 5 seconds ...
	I0318 04:17:39.104723   14722 start.go:360] acquireMachinesLock for default-k8s-diff-port-797000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:17:39.105103   14722 start.go:364] duration metric: took 272.167µs to acquireMachinesLock for "default-k8s-diff-port-797000"
	I0318 04:17:39.105175   14722 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:17:39.105194   14722 fix.go:54] fixHost starting: 
	I0318 04:17:39.105894   14722 fix.go:112] recreateIfNeeded on default-k8s-diff-port-797000: state=Stopped err=<nil>
	W0318 04:17:39.105921   14722 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:17:39.114282   14722 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-797000" ...
	I0318 04:17:39.118468   14722 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:c3:5f:0d:c2:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/default-k8s-diff-port-797000/disk.qcow2
	I0318 04:17:39.128511   14722 main.go:141] libmachine: STDOUT: 
	I0318 04:17:39.128576   14722 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:17:39.128655   14722 fix.go:56] duration metric: took 23.463209ms for fixHost
	I0318 04:17:39.128672   14722 start.go:83] releasing machines lock for "default-k8s-diff-port-797000", held for 23.548916ms
	W0318 04:17:39.128862   14722 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-797000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-797000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:17:39.136283   14722 out.go:177] 
	W0318 04:17:39.140434   14722 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:17:39.140535   14722 out.go:239] * 
	* 
	W0318 04:17:39.143317   14722 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:17:39.151247   14722 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-797000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000: exit status 7 (68.610958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

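This SecondStart failure, and the newest-cni one that follows, show the same recovery shape: one StartHost attempt, a warning, a fixed five-second pause, a final attempt, then exit with GUEST_PROVISION. Compressed to its control flow, as a sketch (startHost is a stand-in for the driver call; the messages echo the log):

// retry_start.go: the one-retry-after-5s pattern visible in the log above.
package main

import (
	"errors"
	"fmt"
	"time"
)

func startHost() error {
	// Stand-in: on this host the driver fails the same way every time.
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func startWithRetry() error {
	err := startHost()
	if err == nil {
		return nil
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	time.Sleep(5 * time.Second)
	return startHost() // second and final attempt
}

func main() {
	if err := startWithRetry(); err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
	}
}
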
TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-078000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-078000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.188732708s)

-- stdout --
	* [newest-cni-078000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-078000" primary control-plane node in "newest-cni-078000" cluster
	* Restarting existing qemu2 VM for "newest-cni-078000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-078000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0318 04:17:37.053646   14748 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:17:37.053772   14748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:17:37.053775   14748 out.go:304] Setting ErrFile to fd 2...
	I0318 04:17:37.053778   14748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:17:37.053919   14748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:17:37.054951   14748 out.go:298] Setting JSON to false
	I0318 04:17:37.070932   14748 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8230,"bootTime":1710752427,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:17:37.070997   14748 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:17:37.074880   14748 out.go:177] * [newest-cni-078000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:17:37.081792   14748 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 04:17:37.085815   14748 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 04:17:37.081848   14748 notify.go:220] Checking for updates...
	I0318 04:17:37.091759   14748 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:17:37.094781   14748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:17:37.097672   14748 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 04:17:37.100735   14748 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:17:37.104157   14748 config.go:182] Loaded profile config "newest-cni-078000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0318 04:17:37.104440   14748 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:17:37.107711   14748 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:17:37.114779   14748 start.go:297] selected driver: qemu2
	I0318 04:17:37.114785   14748 start.go:901] validating driver "qemu2" against &{Name:newest-cni-078000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-078000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:17:37.114842   14748 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:17:37.117130   14748 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0318 04:17:37.117170   14748 cni.go:84] Creating CNI manager for ""
	I0318 04:17:37.117177   14748 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:17:37.117199   14748 start.go:340] cluster config:
	{Name:newest-cni-078000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-078000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:17:37.121599   14748 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:17:37.128788   14748 out.go:177] * Starting "newest-cni-078000" primary control-plane node in "newest-cni-078000" cluster
	I0318 04:17:37.132553   14748 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 04:17:37.132569   14748 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0318 04:17:37.132584   14748 cache.go:56] Caching tarball of preloaded images
	I0318 04:17:37.132649   14748 preload.go:173] Found /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:17:37.132661   14748 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0318 04:17:37.132728   14748 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/newest-cni-078000/config.json ...
	I0318 04:17:37.133212   14748 start.go:360] acquireMachinesLock for newest-cni-078000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:17:37.133240   14748 start.go:364] duration metric: took 21.708µs to acquireMachinesLock for "newest-cni-078000"
	I0318 04:17:37.133250   14748 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:17:37.133255   14748 fix.go:54] fixHost starting: 
	I0318 04:17:37.133384   14748 fix.go:112] recreateIfNeeded on newest-cni-078000: state=Stopped err=<nil>
	W0318 04:17:37.133393   14748 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:17:37.136809   14748 out.go:177] * Restarting existing qemu2 VM for "newest-cni-078000" ...
	I0318 04:17:37.144769   14748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:bd:ff:76:25:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/disk.qcow2
	I0318 04:17:37.146852   14748 main.go:141] libmachine: STDOUT: 
	I0318 04:17:37.146876   14748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:17:37.146904   14748 fix.go:56] duration metric: took 13.649292ms for fixHost
	I0318 04:17:37.146909   14748 start.go:83] releasing machines lock for "newest-cni-078000", held for 13.665ms
	W0318 04:17:37.146917   14748 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:17:37.146949   14748 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:17:37.146955   14748 start.go:728] Will try again in 5 seconds ...
	I0318 04:17:42.149053   14748 start.go:360] acquireMachinesLock for newest-cni-078000: {Name:mk86a7fcaaf0e58cbbbfab5f6270cf8b00af490e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:17:42.149465   14748 start.go:364] duration metric: took 305.208µs to acquireMachinesLock for "newest-cni-078000"
	I0318 04:17:42.149603   14748 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:17:42.149624   14748 fix.go:54] fixHost starting: 
	I0318 04:17:42.150366   14748 fix.go:112] recreateIfNeeded on newest-cni-078000: state=Stopped err=<nil>
	W0318 04:17:42.150393   14748 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:17:42.158745   14748 out.go:177] * Restarting existing qemu2 VM for "newest-cni-078000" ...
	I0318 04:17:42.162960   14748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:bd:ff:76:25:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18431-10628/.minikube/machines/newest-cni-078000/disk.qcow2
	I0318 04:17:42.173612   14748 main.go:141] libmachine: STDOUT: 
	I0318 04:17:42.173679   14748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:17:42.173761   14748 fix.go:56] duration metric: took 24.138834ms for fixHost
	I0318 04:17:42.173775   14748 start.go:83] releasing machines lock for "newest-cni-078000", held for 24.285083ms
	W0318 04:17:42.173978   14748 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-078000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-078000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:17:42.182783   14748 out.go:177] 
	W0318 04:17:42.185943   14748 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:17:42.185999   14748 out.go:239] * 
	* 
	W0318 04:17:42.188890   14748 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:17:42.197852   14748 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-078000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-078000 -n newest-cni-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-078000 -n newest-cni-078000: exit status 7 (71.753ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

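Note that unlike the first starts, both SecondStart runs take the fixHost path: the machine directory already exists, so the driver skips creation ("Skipping create...Using existing machine configuration"), recreateIfNeeded sees state=Stopped, and the existing VM is restarted in place, only to die on the same refused socket connection. The branch, reduced to a sketch (the State type and return strings are illustrative stand-ins, not the libmachine API):

// fix_host.go: the decision visible at fix.go:112 in the logs above.
package main

import "fmt"

type State int

const (
	Running State = iota
	Stopped
)

func recreateIfNeeded(s State) string {
	if s == Stopped {
		return "restart existing qemu2 VM" // the path taken throughout this run
	}
	return "reuse running VM"
}

func main() {
	fmt.Println(recreateIfNeeded(Stopped))
}
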
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-797000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000: exit status 7 (33.324125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-797000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-797000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-797000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.808209ms)

** stderr ** 
	error: context "default-k8s-diff-port-797000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-797000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000: exit status 7 (30.562791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-797000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000: exit status 7 (31.458875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
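Note: the (-want +got) diff lists every expected v1.28.4 image as missing because "image list" has nothing to report against a Stopped host; the expected set itself is the normal one for this Kubernetes version. Re-running the probe by hand shows the dependency on host state:

    # On a running host this includes registry.k8s.io/kube-apiserver:v1.28.4 etc.;
    # on a Stopped host it prints nothing, which yields the all-missing diff above.
    out/minikube-darwin-arm64 -p default-k8s-diff-port-797000 image list --format=json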

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-797000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-797000 --alsologtostderr -v=1: exit status 83 (41.922709ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-797000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-797000"

-- /stdout --
** stderr ** 
	I0318 04:17:39.430435   14767 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:17:39.430590   14767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:17:39.430594   14767 out.go:304] Setting ErrFile to fd 2...
	I0318 04:17:39.430596   14767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:17:39.430724   14767 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:17:39.430962   14767 out.go:298] Setting JSON to false
	I0318 04:17:39.430970   14767 mustload.go:65] Loading cluster: default-k8s-diff-port-797000
	I0318 04:17:39.431149   14767 config.go:182] Loaded profile config "default-k8s-diff-port-797000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:17:39.435310   14767 out.go:177] * The control-plane node default-k8s-diff-port-797000 host is not running: state=Stopped
	I0318 04:17:39.439364   14767 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-797000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-797000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000: exit status 7 (31.100333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-797000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000: exit status 7 (30.551917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
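Note: pause exits 83 with guidance text instead of pausing because cluster loading (mustload, per the stderr above) finds the host Stopped. For contrast, the intended sequence on a healthy profile would be (a sketch; it assumes the start succeeds, which it did not anywhere in this run):

    out/minikube-darwin-arm64 start -p default-k8s-diff-port-797000 --driver=qemu2
    out/minikube-darwin-arm64 pause -p default-k8s-diff-port-797000 --alsologtostderr -v=1
    out/minikube-darwin-arm64 unpause -p default-k8s-diff-port-797000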

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-078000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-078000 -n newest-cni-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-078000 -n newest-cni-078000: exit status 7 (31.633042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-078000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-078000 --alsologtostderr -v=1: exit status 83 (41.706667ms)

-- stdout --
	* The control-plane node newest-cni-078000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-078000"

                                                
** stderr ** 
	I0318 04:17:42.391771   14797 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:17:42.391937   14797 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:17:42.391945   14797 out.go:304] Setting ErrFile to fd 2...
	I0318 04:17:42.391947   14797 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:17:42.392078   14797 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 04:17:42.392296   14797 out.go:298] Setting JSON to false
	I0318 04:17:42.392304   14797 mustload.go:65] Loading cluster: newest-cni-078000
	I0318 04:17:42.392503   14797 config.go:182] Loaded profile config "newest-cni-078000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0318 04:17:42.395857   14797 out.go:177] * The control-plane node newest-cni-078000 host is not running: state=Stopped
	I0318 04:17:42.399760   14797 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-078000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-078000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-078000 -n newest-cni-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-078000 -n newest-cni-078000: exit status 7 (31.157333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-078000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-078000 -n newest-cni-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-078000 -n newest-cni-078000: exit status 7 (31.450375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.26
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.28.4/json-events 21.74
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.23
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.22
21 TestDownloadOnly/v1.29.0-rc.2/json-events 24.83
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.23
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.23
30 TestBinaryMirror 0.35
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 9.19
48 TestErrorSpam/start 0.39
49 TestErrorSpam/status 0.1
50 TestErrorSpam/pause 0.13
51 TestErrorSpam/unpause 0.13
52 TestErrorSpam/stop 9.59
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 6.13
64 TestFunctional/serial/CacheCmd/cache/add_local 1.16
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.24
80 TestFunctional/parallel/DryRun 0.25
81 TestFunctional/parallel/InternationalLanguage 0.12
87 TestFunctional/parallel/AddonsCmd 0.12
102 TestFunctional/parallel/License 1.29
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
116 TestFunctional/parallel/ProfileCmd/profile_list 0.11
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
121 TestFunctional/parallel/Version/short 0.04
128 TestFunctional/parallel/ImageCommands/Setup 5.39
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_addon-resizer_images 0.17
145 TestFunctional/delete_my-image_image 0.04
146 TestFunctional/delete_minikube_cached_images 0.04
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 3.6
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.33
202 TestMainNoArgs 0.04
247 TestStoppedBinaryUpgrade/Setup 5.03
258 TestStoppedBinaryUpgrade/MinikubeLogs 0.73
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
265 TestNoKubernetes/serial/ProfileList 15.93
267 TestNoKubernetes/serial/Stop 3.82
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
284 TestStartStop/group/old-k8s-version/serial/Stop 3.72
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
295 TestStartStop/group/no-preload/serial/Stop 1.86
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
306 TestStartStop/group/embed-certs/serial/Stop 2.93
307 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
319 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.67
320 TestStartStop/group/newest-cni/serial/DeployApp 0
321 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
322 TestStartStop/group/newest-cni/serial/Stop 3.49
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-983000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-983000: exit status 85 (99.895292ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-983000 | jenkins | v1.32.0 | 18 Mar 24 03:49 PDT |          |
	|         | -p download-only-983000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 03:49:29
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 03:49:29.696575   11037 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:49:29.696737   11037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:49:29.696740   11037 out.go:304] Setting ErrFile to fd 2...
	I0318 03:49:29.696743   11037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:49:29.696863   11037 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	W0318 03:49:29.696955   11037 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18431-10628/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18431-10628/.minikube/config/config.json: no such file or directory
	I0318 03:49:29.698269   11037 out.go:298] Setting JSON to true
	I0318 03:49:29.716024   11037 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6542,"bootTime":1710752427,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 03:49:29.716087   11037 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 03:49:29.721243   11037 out.go:97] [download-only-983000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 03:49:29.726257   11037 out.go:169] MINIKUBE_LOCATION=18431
	I0318 03:49:29.721383   11037 notify.go:220] Checking for updates...
	W0318 03:49:29.721415   11037 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball: no such file or directory
	I0318 03:49:29.733185   11037 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 03:49:29.736188   11037 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 03:49:29.739256   11037 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 03:49:29.742205   11037 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	W0318 03:49:29.748197   11037 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 03:49:29.748429   11037 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 03:49:29.749903   11037 out.go:97] Using the qemu2 driver based on user configuration
	I0318 03:49:29.749921   11037 start.go:297] selected driver: qemu2
	I0318 03:49:29.749926   11037 start.go:901] validating driver "qemu2" against <nil>
	I0318 03:49:29.749985   11037 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 03:49:29.753219   11037 out.go:169] Automatically selected the socket_vmnet network
	I0318 03:49:29.758451   11037 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0318 03:49:29.758548   11037 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 03:49:29.758634   11037 cni.go:84] Creating CNI manager for ""
	I0318 03:49:29.758651   11037 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 03:49:29.758710   11037 start.go:340] cluster config:
	{Name:download-only-983000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-983000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 03:49:29.763357   11037 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 03:49:29.766158   11037 out.go:97] Downloading VM boot image ...
	I0318 03:49:29.766171   11037 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso
	I0318 03:49:47.597446   11037 out.go:97] Starting "download-only-983000" primary control-plane node in "download-only-983000" cluster
	I0318 03:49:47.597470   11037 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 03:49:47.880699   11037 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 03:49:47.880837   11037 cache.go:56] Caching tarball of preloaded images
	I0318 03:49:47.881533   11037 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 03:49:47.887052   11037 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0318 03:49:47.887082   11037 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 03:49:48.473563   11037 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 03:50:07.641124   11037 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 03:50:07.641310   11037 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 03:50:08.342042   11037 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 03:50:08.342246   11037 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/download-only-983000/config.json ...
	I0318 03:50:08.342264   11037 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/download-only-983000/config.json: {Name:mk2f92b12e91f46ec3ec4150e042456020d39c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 03:50:08.343689   11037 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 03:50:08.344054   11037 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0318 03:50:08.957433   11037 out.go:169] 
	W0318 03:50:08.962540   11037 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18431-10628/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104387520 0x104387520 0x104387520 0x104387520 0x104387520 0x104387520 0x104387520] Decompressors:map[bz2:0x14000651510 gz:0x14000651518 tar:0x14000651360 tar.bz2:0x14000651370 tar.gz:0x14000651380 tar.xz:0x140006513a0 tar.zst:0x140006513b0 tbz2:0x14000651370 tgz:0x14000651380 txz:0x140006513a0 tzst:0x140006513b0 xz:0x14000651520 zip:0x14000651530 zst:0x14000651528] Getters:map[file:0x140024a4770 http:0x140006dc230 https:0x140006dc280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0318 03:50:08.962564   11037 out_reason.go:110] 
	W0318 03:50:08.969457   11037 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 03:50:08.973457   11037 out.go:169] 
	
	
	* The control-plane node download-only-983000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-983000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
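Note: the non-zero exit (85) from "logs" is tolerated here for a download-only profile; what the embedded log actually records is the kubectl cache failure, a 404 on the v1.20.0 darwin/arm64 checksum file (that release apparently predates Apple-silicon kubectl binaries). The 404 can be confirmed independently of minikube, using the URL from the log:

    # -I fetches headers only, -L follows the dl.k8s.io redirect; the checksum URL returns 404:
    curl -sIL https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256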

TestDownloadOnly/v1.20.0/DeleteAll (0.26s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.26s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-983000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.28.4/json-events (21.74s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-875000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-875000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 : (21.737519292s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (21.74s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)
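Note: preload-exists passes instantly because the preceding json-events run already populated the cache. The tarball location is deterministic under MINIKUBE_HOME, so the check can be reproduced by hand (path taken from this run's own log output):

    ls -lh /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4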

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-875000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-875000: exit status 85 (80.309875ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-983000 | jenkins | v1.32.0 | 18 Mar 24 03:49 PDT |                     |
	|         | -p download-only-983000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
	| delete  | -p download-only-983000        | download-only-983000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
	| start   | -o=json --download-only        | download-only-875000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
	|         | -p download-only-875000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 03:50:09
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 03:50:09.670104   11074 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:50:09.670463   11074 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:50:09.670467   11074 out.go:304] Setting ErrFile to fd 2...
	I0318 03:50:09.670470   11074 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:50:09.670663   11074 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:50:09.672091   11074 out.go:298] Setting JSON to true
	I0318 03:50:09.688439   11074 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6582,"bootTime":1710752427,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 03:50:09.688504   11074 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 03:50:09.693030   11074 out.go:97] [download-only-875000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 03:50:09.696958   11074 out.go:169] MINIKUBE_LOCATION=18431
	I0318 03:50:09.693134   11074 notify.go:220] Checking for updates...
	I0318 03:50:09.703795   11074 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 03:50:09.706963   11074 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 03:50:09.710015   11074 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 03:50:09.713018   11074 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	W0318 03:50:09.718927   11074 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 03:50:09.719140   11074 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 03:50:09.721953   11074 out.go:97] Using the qemu2 driver based on user configuration
	I0318 03:50:09.721963   11074 start.go:297] selected driver: qemu2
	I0318 03:50:09.721967   11074 start.go:901] validating driver "qemu2" against <nil>
	I0318 03:50:09.722020   11074 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 03:50:09.724943   11074 out.go:169] Automatically selected the socket_vmnet network
	I0318 03:50:09.730086   11074 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0318 03:50:09.730189   11074 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 03:50:09.730224   11074 cni.go:84] Creating CNI manager for ""
	I0318 03:50:09.730232   11074 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 03:50:09.730239   11074 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 03:50:09.730288   11074 start.go:340] cluster config:
	{Name:download-only-875000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 03:50:09.734738   11074 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 03:50:09.737966   11074 out.go:97] Starting "download-only-875000" primary control-plane node in "download-only-875000" cluster
	I0318 03:50:09.737973   11074 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 03:50:10.403132   11074 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 03:50:10.403193   11074 cache.go:56] Caching tarball of preloaded images
	I0318 03:50:10.404483   11074 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 03:50:10.409891   11074 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0318 03:50:10.409926   11074 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0318 03:50:10.993471   11074 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 03:50:29.197027   11074 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0318 03:50:29.197196   11074 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-875000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-875000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

TestDownloadOnly/v1.28.4/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.23s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-875000
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.29.0-rc.2/json-events (24.83s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-053000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-053000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 : (24.834450125s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (24.83s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-053000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-053000: exit status 85 (80.145084ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-983000 | jenkins | v1.32.0 | 18 Mar 24 03:49 PDT |                     |
	|         | -p download-only-983000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
	| delete  | -p download-only-983000           | download-only-983000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
	| start   | -o=json --download-only           | download-only-875000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
	|         | -p download-only-875000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
	| delete  | -p download-only-875000           | download-only-875000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT | 18 Mar 24 03:50 PDT |
	| start   | -o=json --download-only           | download-only-053000 | jenkins | v1.32.0 | 18 Mar 24 03:50 PDT |                     |
	|         | -p download-only-053000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 03:50:31
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 03:50:31.942696   11116 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:50:31.942822   11116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:50:31.942827   11116 out.go:304] Setting ErrFile to fd 2...
	I0318 03:50:31.942829   11116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:50:31.942941   11116 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:50:31.944038   11116 out.go:298] Setting JSON to true
	I0318 03:50:31.960401   11116 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6604,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 03:50:31.960473   11116 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 03:50:31.965148   11116 out.go:97] [download-only-053000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 03:50:31.969080   11116 out.go:169] MINIKUBE_LOCATION=18431
	I0318 03:50:31.965257   11116 notify.go:220] Checking for updates...
	I0318 03:50:31.977110   11116 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 03:50:31.980203   11116 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 03:50:31.983119   11116 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 03:50:31.986104   11116 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	W0318 03:50:31.992090   11116 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 03:50:31.992268   11116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 03:50:31.995083   11116 out.go:97] Using the qemu2 driver based on user configuration
	I0318 03:50:31.995091   11116 start.go:297] selected driver: qemu2
	I0318 03:50:31.995095   11116 start.go:901] validating driver "qemu2" against <nil>
	I0318 03:50:31.995138   11116 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 03:50:31.998108   11116 out.go:169] Automatically selected the socket_vmnet network
	I0318 03:50:32.003327   11116 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0318 03:50:32.003429   11116 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 03:50:32.003474   11116 cni.go:84] Creating CNI manager for ""
	I0318 03:50:32.003482   11116 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 03:50:32.003488   11116 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 03:50:32.003532   11116 start.go:340] cluster config:
	{Name:download-only-053000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-053000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 03:50:32.007967   11116 iso.go:125] acquiring lock: {Name:mka84f0a9cf706ea87958c9c588a34c3b4e806fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 03:50:32.011293   11116 out.go:97] Starting "download-only-053000" primary control-plane node in "download-only-053000" cluster
	I0318 03:50:32.011308   11116 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 03:50:32.666386   11116 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0318 03:50:32.666463   11116 cache.go:56] Caching tarball of preloaded images
	I0318 03:50:32.667240   11116 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 03:50:32.672765   11116 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0318 03:50:32.672800   11116 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0318 03:50:33.357996   11116 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4?checksum=md5:ec278d0a65e5e64ee0e67f51e14b1867 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0318 03:50:49.077758   11116 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0318 03:50:49.077910   11116 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0318 03:50:49.634312   11116 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0318 03:50:49.634508   11116 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/download-only-053000/config.json ...
	I0318 03:50:49.634526   11116 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18431-10628/.minikube/profiles/download-only-053000/config.json: {Name:mk05b640bd2e294818752c5cfa409a5bea0fe095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 03:50:49.634796   11116 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 03:50:49.635576   11116 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18431-10628/.minikube/cache/darwin/arm64/v1.29.0-rc.2/kubectl
	
	
	* The control-plane node download-only-053000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-053000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.23s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-053000
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.35s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-389000 --alsologtostderr --binary-mirror http://127.0.0.1:52076 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-389000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-389000
--- PASS: TestBinaryMirror (0.35s)
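Note: --binary-mirror redirects the kubectl/kubelet/kubeadm downloads to an alternate HTTP endpoint. A minimal local sketch, assuming a directory laid out like the dl.k8s.io release tree (the port, directory, and profile name here are illustrative, not the harness's):
    # serve pre-fetched binaries over HTTP (Python 3.7+)
    python3 -m http.server 52076 --directory ./mirror &
    # with --download-only, minikube fetches from the mirror without creating a VM
    out/minikube-darwin-arm64 start --download-only -p mirror-demo --binary-mirror http://127.0.0.1:52076 --driver=qemu2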

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-852000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-852000: exit status 85 (60.407625ms)

-- stdout --
	* Profile "addons-852000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-852000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-852000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-852000: exit status 85 (63.106042ms)

-- stdout --
	* Profile "addons-852000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-852000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (9.19s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.19s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 status: exit status 7 (32.72225ms)

-- stdout --
	nospam-289000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 status: exit status 7 (32.206416ms)

-- stdout --
	nospam-289000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 status: exit status 7 (31.842458ms)

-- stdout --
	nospam-289000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)
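Note: as the three runs show, minikube status reports host state through its exit code (7 here, with the host stopped) rather than failing loudly on stderr. A sketch of branching on it in a shell script:
    # exit 0 means the cluster is up; 7 indicates a stopped host
    if out/minikube-darwin-arm64 -p nospam-289000 status >/dev/null; then
        echo "cluster is up"
    else
        echo "cluster not running (exit $?)"
    fi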

TestErrorSpam/pause (0.13s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 pause: exit status 83 (40.526709ms)

-- stdout --
	* The control-plane node nospam-289000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-289000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 pause: exit status 83 (41.842083ms)

-- stdout --
	* The control-plane node nospam-289000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-289000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 pause: exit status 83 (46.547209ms)

-- stdout --
	* The control-plane node nospam-289000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-289000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.13s)

TestErrorSpam/unpause (0.13s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 unpause: exit status 83 (42.94275ms)

-- stdout --
	* The control-plane node nospam-289000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-289000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 unpause: exit status 83 (41.895833ms)

-- stdout --
	* The control-plane node nospam-289000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-289000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 unpause: exit status 83 (41.83125ms)

-- stdout --
	* The control-plane node nospam-289000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-289000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

TestErrorSpam/stop (9.59s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 stop: (3.59295275s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 stop: (3.031325791s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-289000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-289000 stop: (2.962475666s)
--- PASS: TestErrorSpam/stop (9.59s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18431-10628/.minikube/files/etc/test/nested/copy/11035/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-162000 cache add registry.k8s.io/pause:3.1: (2.129556666s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-162000 cache add registry.k8s.io/pause:3.3: (2.168121458s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-162000 cache add registry.k8s.io/pause:latest: (1.83031s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.13s)
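Note: cache add pulls each image on the host and stores it under MINIKUBE_HOME for later loading into nodes. A minimal round-trip sketch using the same commands this suite exercises (list and delete appear in the subtests further down):
    out/minikube-darwin-arm64 -p functional-162000 cache add registry.k8s.io/pause:3.1
    out/minikube-darwin-arm64 cache list        # the cached tag should be listed
    out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1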

TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-162000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local1242312821/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 cache add minikube-local-cache-test:functional-162000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 cache delete minikube-local-cache-test:functional-162000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-162000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.16s)
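Note: the add_local flow above builds a throwaway image on the host, caches it into minikube, then cleans up. Reproduced by hand (the demo tag and current-directory build context are illustrative):
    docker build -t minikube-local-cache-test:demo .
    out/minikube-darwin-arm64 -p functional-162000 cache add minikube-local-cache-test:demo
    out/minikube-darwin-arm64 -p functional-162000 cache delete minikube-local-cache-test:demo
    docker rmi minikube-local-cache-test:demo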

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 config get cpus: exit status 14 (33.26925ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 config get cpus: exit status 14 (40.206584ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)
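Note: config get on an unset key fails with exit status 14 and "specified key could not be found in config" on stderr, as captured twice above; it does not print an empty value. The round trip this test exercises:
    out/minikube-darwin-arm64 -p functional-162000 config set cpus 2
    out/minikube-darwin-arm64 -p functional-162000 config get cpus    # prints 2, exit 0
    out/minikube-darwin-arm64 -p functional-162000 config unset cpus
    out/minikube-darwin-arm64 -p functional-162000 config get cpus    # exit 14, key not found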

TestFunctional/parallel/DryRun (0.25s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-162000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-162000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (118.712542ms)

-- stdout --
	* [functional-162000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0318 03:52:35.306170   11634 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:52:35.306314   11634 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:52:35.306318   11634 out.go:304] Setting ErrFile to fd 2...
	I0318 03:52:35.306320   11634 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:52:35.306440   11634 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:52:35.307415   11634 out.go:298] Setting JSON to false
	I0318 03:52:35.323400   11634 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6728,"bootTime":1710752427,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 03:52:35.323477   11634 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 03:52:35.328785   11634 out.go:177] * [functional-162000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 03:52:35.335777   11634 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 03:52:35.339776   11634 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 03:52:35.335807   11634 notify.go:220] Checking for updates...
	I0318 03:52:35.343799   11634 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 03:52:35.347776   11634 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 03:52:35.350787   11634 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 03:52:35.353820   11634 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 03:52:35.357104   11634 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:52:35.357345   11634 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 03:52:35.361827   11634 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 03:52:35.368792   11634 start.go:297] selected driver: qemu2
	I0318 03:52:35.368797   11634 start.go:901] validating driver "qemu2" against &{Name:functional-162000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-162000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 03:52:35.368874   11634 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 03:52:35.374820   11634 out.go:177] 
	W0318 03:52:35.378795   11634 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0318 03:52:35.382747   11634 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-162000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.25s)
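Note: memory validation runs before any VM work, so --dry-run still rejects an allocation below the 1800MB floor with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23), as captured above. Checking the code directly:
    out/minikube-darwin-arm64 start -p functional-162000 --dry-run --memory 250MB --driver=qemu2
    echo $?    # 23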

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-162000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-162000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (115.107291ms)

-- stdout --
	* [functional-162000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0318 03:52:35.182976   11630 out.go:291] Setting OutFile to fd 1 ...
	I0318 03:52:35.183086   11630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:52:35.183089   11630 out.go:304] Setting ErrFile to fd 2...
	I0318 03:52:35.183091   11630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 03:52:35.183219   11630 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18431-10628/.minikube/bin
	I0318 03:52:35.184669   11630 out.go:298] Setting JSON to false
	I0318 03:52:35.201443   11630 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6728,"bootTime":1710752427,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 03:52:35.201516   11630 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 03:52:35.206860   11630 out.go:177] * [functional-162000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	I0318 03:52:35.213859   11630 out.go:177]   - MINIKUBE_LOCATION=18431
	I0318 03:52:35.216837   11630 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	I0318 03:52:35.213921   11630 notify.go:220] Checking for updates...
	I0318 03:52:35.224810   11630 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 03:52:35.227842   11630 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 03:52:35.230823   11630 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	I0318 03:52:35.233805   11630 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 03:52:35.237197   11630 config.go:182] Loaded profile config "functional-162000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 03:52:35.237447   11630 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 03:52:35.241719   11630 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0318 03:52:35.248821   11630 start.go:297] selected driver: qemu2
	I0318 03:52:35.248828   11630 start.go:901] validating driver "qemu2" against &{Name:functional-162000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-162000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 03:52:35.248887   11630 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 03:52:35.255780   11630 out.go:177] 
	W0318 03:52:35.259845   11630 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0318 03:52:35.263785   11630 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (1.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-arm64 license: (1.290117s)
--- PASS: TestFunctional/parallel/License (1.29s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-162000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "73.3375ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "35.978833ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "73.373125ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "36.6765ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (5.39s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.3486295s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-162000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.39s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 image rm gcr.io/google-containers/addon-resizer:functional-162000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-162000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 image save --daemon gcr.io/google-containers/addon-resizer:functional-162000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-162000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.014129s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
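Note: dscacheutil consults the macOS Directory Service cache, so a stale entry can mask the tunnel's DNS changes; when a query misbehaves, flushing the resolver caches first is the usual step (both commands are stock macOS):
    sudo dscacheutil -flushcache
    sudo killall -HUP mDNSResponder
    dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.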

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-162000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.17s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-162000
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-162000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-162000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.6s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-165000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-165000 --output=json --user=testUser: (3.598007333s)
--- PASS: TestJSONOutput/stop/Command (3.60s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-951000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-951000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.184666ms)

-- stdout --
	{"specversion":"1.0","id":"a421c8fb-c0d6-4bbe-8841-0180cc7f5758","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-951000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f994dd8c-7c7c-410f-bdf2-7d28381019b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18431"}}
	{"specversion":"1.0","id":"00ab095c-603e-48d6-b327-d714068fd300","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig"}}
	{"specversion":"1.0","id":"389e6e06-4e69-4ec0-874c-a23afd4507ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c924499e-6399-435d-a85e-18f3f7074232","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"932ce50d-37e6-4b5c-8dd5-fe91e76ca2f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube"}}
	{"specversion":"1.0","id":"e5184afe-7aae-402b-b2eb-3bd3148a5e7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"85e0f18c-8e27-4124-baae-f5827775173e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-951000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-951000
--- PASS: TestErrorJSONOutput (0.33s)
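Note: each line of --output=json is a CloudEvents envelope, so failures can be extracted mechanically. A sketch with jq (jq assumed installed; the profile name is illustrative, and --driver=fail reproduces the scenario above):
    out/minikube-darwin-arm64 start -p json-demo --output=json --driver=fail \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'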

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (5.03s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (5.03s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-484000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-112000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-112000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (98.83575ms)

-- stdout --
	* [NoKubernetes-112000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18431
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18431-10628/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
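
The test passes because the failure is the expected one: minikube rejects --kubernetes-version combined with --no-kubernetes as a usage error (MK_USAGE) and exits with status 14. A sketch of asserting that exit code, with the invocation taken from the log above (not the harness's own code):

	// Sketch: rerun the invocation above and assert the MK_USAGE
	// exit status 14. Illustrative only; the binary path is assumed.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "start",
			"-p", "NoKubernetes-112000", "--no-kubernetes",
			"--kubernetes-version=1.20", "--driver=qemu2")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
			fmt.Println("got expected usage-error exit status 14")
			return
		}
		fmt.Println("unexpected result:", err)
	}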

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-112000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-112000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.372667ms)

-- stdout --
	* The control-plane node NoKubernetes-112000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-112000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
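
systemctl is-active --quiet prints nothing and reports state only through its exit status (0 when the unit is active). Here the command never reaches systemctl: minikube itself exits with status 83 because the host is stopped, as the stdout above shows. A sketch distinguishing the two cases, reusing the binary path and profile from the log:

	// Sketch: separate "host not running" (minikube's exit 83, as in
	// the log above) from "kubelet inactive" (systemctl's non-zero
	// status forwarded through minikube ssh). Illustrative only.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "ssh",
			"-p", "NoKubernetes-112000",
			"sudo systemctl is-active --quiet service kubelet")
		err := cmd.Run()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("kubelet is active")
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 83:
			fmt.Println("host is not running")
		default:
			fmt.Println("kubelet is not active:", err)
		}
	}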

TestNoKubernetes/serial/ProfileList (15.93s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.811984333s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (15.93s)
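
Nearly all of this test's 15.93s is spent in the first plain "profile list"; the --output=json variant is the one meant for programmatic use. A sketch of decoding it, assuming the output has top-level "valid"/"invalid" arrays of profile objects with a "Name" field (an assumption about the schema, not something verified from this log):

	// Sketch: parse "minikube profile list --output=json". The
	// valid/invalid layout and the Name field are assumptions.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type profile struct {
		Name string `json:"Name"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64",
			"profile", "list", "--output=json").Output()
		if err != nil {
			fmt.Println("profile list failed:", err)
			return
		}
		var byState map[string][]profile
		if err := json.Unmarshal(out, &byState); err != nil {
			fmt.Println("unexpected JSON:", err)
			return
		}
		for _, p := range byState["valid"] {
			fmt.Println("valid profile:", p.Name)
		}
	}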

TestNoKubernetes/serial/Stop (3.82s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-112000
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18431
- KUBECONFIG=/Users/jenkins/minikube-integration/18431-10628/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2238671149/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-112000: (3.815211083s)
--- PASS: TestNoKubernetes/serial/Stop (3.82s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-112000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-112000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.138041ms)

-- stdout --
	* The control-plane node NoKubernetes-112000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-112000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStartStop/group/old-k8s-version/serial/Stop (3.72s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-492000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-492000 --alsologtostderr -v=3: (3.715194833s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.72s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-492000 -n old-k8s-version-492000: exit status 7 (63.730416ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-492000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
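
The status check here relies on two conventions: --format={{.Host}} is a Go text/template rendered against the status object (so stdout is just "Stopped"), and the exit status (7) encodes the host state, which the test tolerates right after a stop ("may be ok"). A small stand-in showing the template semantics; the struct below is illustrative, not minikube's own type:

	// Illustration of how a --format={{.Host}} style flag is applied:
	// parse the flag value as a text/template and execute it against
	// a status value. Stand-in struct; prints "Stopped" as in the log.
	package main

	import (
		"os"
		"text/template"
	)

	type status struct {
		Host    string
		Kubelet string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		_ = tmpl.Execute(os.Stdout, status{Host: "Stopped", Kubelet: "Stopped"})
	}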

TestStartStop/group/no-preload/serial/Stop (1.86s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-586000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-586000 --alsologtostderr -v=3: (1.85712475s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.86s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-586000 -n no-preload-586000: exit status 7 (56.909208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-586000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (2.93s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-684000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-684000 --alsologtostderr -v=3: (2.928693875s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.93s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-684000 -n embed-certs-684000: exit status 7 (61.087458ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-684000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.67s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-797000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-797000 --alsologtostderr -v=3: (3.673257375s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.67s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-078000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.49s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-078000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-078000 --alsologtostderr -v=3: (3.489371625s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.49s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-797000 -n default-k8s-diff-port-797000: exit status 7 (58.680958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-797000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-078000 -n newest-cni-078000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-078000 -n newest-cni-078000: exit status 7 (59.422833ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-078000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (12.66s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-162000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1150592400/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710759120273487000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1150592400/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710759120273487000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1150592400/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710759120273487000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1150592400/001/test-1710759120273487000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (59.1105ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (94.093458ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.0835ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.264333ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.450708ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.081042ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.128625ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "sudo umount -f /mount-9p": exit status 83 (50.335334ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-162000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-162000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1150592400/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (12.66s)
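
The skip is reached only after the harness polls for the 9p mount several times (seven findmnt attempts above) and never sees it, because macOS prompts before letting the unsigned binary listen on a non-localhost port. A sketch of that style of retry loop, with the attempt count and delay chosen for illustration rather than taken from the harness:

	// Sketch: poll for the 9p mount via "minikube ssh findmnt" and
	// give up after a fixed retry budget. Timing values are not the
	// harness's actual settings.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for attempt := 1; attempt <= 7; attempt++ {
			cmd := exec.Command("out/minikube-darwin-arm64",
				"-p", "functional-162000", "ssh",
				"findmnt -T /mount-9p | grep 9p")
			if err := cmd.Run(); err == nil {
				fmt.Println("mount appeared")
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("mount did not appear; skipping")
	}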

TestFunctional/parallel/MountCmd/specific-port (12.91s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-162000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1252212987/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (62.6295ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.471833ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.755333ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.51575ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.26625ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.31825ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.294ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.553709ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "sudo umount -f /mount-9p": exit status 83 (50.606708ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-162000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-162000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1252212987/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (12.91s)

TestFunctional/parallel/MountCmd/VerifyCleanup (9.15s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-162000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1934022978/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-162000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1934022978/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-162000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1934022978/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T" /mount1: exit status 83 (88.440125ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T" /mount1: exit status 83 (91.618375ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T" /mount1: exit status 83 (87.71425ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T" /mount1: exit status 83 (87.928875ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T" /mount1: exit status 83 (90.15525ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T" /mount1: exit status 83 (88.511041ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-162000 ssh "findmnt -T" /mount1: exit status 83 (89.06725ms)

-- stdout --
	* The control-plane node functional-162000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-162000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-162000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1934022978/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-162000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1934022978/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-162000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1934022978/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (9.15s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.53s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-466000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-466000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-466000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-466000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-466000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-466000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-466000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-466000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-466000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-466000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-466000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

>>> host: /etc/hosts:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

>>> host: /etc/resolv.conf:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-466000

>>> host: crictl pods:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

>>> host: crictl containers:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

>>> k8s: describe netcat deployment:
error: context "cilium-466000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-466000" does not exist

>>> k8s: netcat logs:
error: context "cilium-466000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-466000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-466000" does not exist

>>> k8s: coredns logs:
error: context "cilium-466000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-466000" does not exist

>>> k8s: api server logs:
error: context "cilium-466000" does not exist

>>> host: /etc/cni:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

>>> host: ip a s:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

>>> host: ip r s:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

>>> host: iptables-save:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

>>> host: iptables table nat:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-466000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-466000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-466000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-466000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-466000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-466000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-466000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-466000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-466000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-466000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-466000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

>>> host: kubelet daemon config:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

>>> k8s: kubelet logs:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-466000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-466000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-466000"

                                                
                                                
----------------------- debugLogs end: cilium-466000 [took: 2.292838542s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-466000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-466000
--- SKIP: TestNetworkPlugins/group/cilium (2.53s)

TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-921000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-921000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)